Dataset schema (column, dtype, observed range):
text             string    length 199 to 648k
id               string    length 47 (fixed)
dump             string    1 distinct value
url              string    length 14 to 419
file_path        string    length 139 to 140
language         string    1 distinct value
language_score   float64   0.65 to 1
token_count      int64     50 to 235k
score            float64   2.52 to 5.34
int_score        int64     3 to 5
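Each record below pairs a text sample with the metadata fields listed above. As a rough illustration of how those fields can be used, here is a minimal Python sketch that filters records on the quality columns; the field names follow the schema, the abridged sample record reuses metadata from the first entry below, and the threshold values are arbitrary choices for the example, not part of the dataset.

```python
# Minimal sketch: filtering records that follow the schema above.
# The thresholds are arbitrary example values, not part of the dataset.
records = [
    {
        "id": "<urn:uuid:87d8ef6f-d512-4f26-9e55-6c8d76829341>",
        "dump": "CC-MAIN-2016-26",
        "language": "en",
        "language_score": 0.940841,
        "token_count": 433,
        "score": 3.859375,
        "int_score": 4,
        "text": "Everglades National Park is a U.S. National Park in Florida ...",
    },
    # ... further records with the same fields
]

def keep(record, min_int_score=3, min_language_score=0.9, min_tokens=200):
    """Return True when a record clears simple quality thresholds."""
    return (
        record["int_score"] >= min_int_score
        and record["language_score"] >= min_language_score
        and record["token_count"] >= min_tokens
    )

filtered = [r for r in records if keep(r)]
print(f"kept {len(filtered)} of {len(records)} records")
```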
Everglades National Park is a U.S. National Park in Florida that protects the southern 20 percent of the original Everglades. In the United States, it is the largest tropical wilderness, the largest wilderness of any kind east of the Mississippi River, and is visited on average by one million people each year. It is the third-largest national park in the lower 48 states after Death Valley and Yellowstone. It has been declared an International Biosphere Reserve, a World Heritage Site, and a Wetland of International Importance, one of only three locations in the world to appear on all three lists. Although most U.S. national parks preserve unique geographic features, Everglades National Park was the first created to protect a fragile ecosystem. The Everglades are a network of wetlands and forests fed by a river flowing 0.25 miles (0.40 km) per day out of Lake Okeechobee, southwest into Florida Bay. The park is the most significant breeding ground for tropical wading birds in North America, contains the largest mangrove ecosystem in the western hemisphere, is home to 36 threatened or protected species including the Florida panther, the American crocodile, and the West Indian manatee, and supports 350 species of birds, 300 species of fresh and saltwater fish, 40 species of mammals, and 50 species of reptiles. The majority of South Florida's fresh water, which is stored in the Biscayne Aquifer, is recharged in the park. Humans have lived in or around the Everglades for thousands of years. Plans arose in 1882 to drain the wetlands and develop the recovered land for agricultural and residential use, and as the 20th century progressed, water flow from Lake Okeechobee was increasingly controlled and diverted to enable explosive growth of the South Florida metropolitan area. The park was established in 1934 to protect the quickly vanishing Everglades, and dedicated in 1947 as massive canal-building projects were initiated across South Florida. The ecosystems in Everglades National Park have suffered significantly from human activity, and restoration of the Everglades is a politically charged issue in South Florida.
<urn:uuid:87d8ef6f-d512-4f26-9e55-6c8d76829341>
CC-MAIN-2016-26
http://www.breakingnews.com/topic/whales-stranded-in-everglades-national-park/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395548.53/warc/CC-MAIN-20160624154955-00186-ip-10-164-35-72.ec2.internal.warc.gz
en
0.940841
433
3.859375
4
World Mental Health Day draws attention to mental health and older adults
On Thursday, October 10, 2013, mental health organizations across the globe will celebrate World Mental Health Day. Led by the World Federation of Mental Health, World Mental Health Day is supported by the World Health Organization as an important day to raise awareness and advocate for better care for those with mental health issues worldwide. This year's theme is mental health and older adults, in recognition of the fact that a growing proportion of the world's people is reaching old age as a result of improving health care and living standards. Unfortunately, the increased health risks associated with old age leave older people at risk for mental health issues. Long-term conditions such as heart disease, cancer and diabetes are known to place people at risk for mental disorders, with an increased risk of depression, anxiety and substance abuse. Dementia is also a rising problem as more people live into old age. The social determinants of health are particularly pertinent to older adults, as they are more at risk for factors such as loss of independence, poverty and social isolation, which can affect emotional well-being and result in poorer mental health. In Canada, depression is the most common mental health problem for older adults, with substantial depressive symptoms affecting an estimated 15 percent of those living in the community and up to 44 percent of residents in long-term care homes. Research also shows that men aged 80 and older have the highest suicide rate in Canada. To help spread awareness about the growing impact of mental health issues across all ages, CMHA Ontario has created an infographic to share. You can read more about World Mental Health Day on the World Federation of Mental Health website.
<urn:uuid:3fd90cf5-5ca0-4e72-b044-719e7681909d>
CC-MAIN-2016-26
http://ontario.cmha.ca/news/world-mental-health-day-draws-attention-to-mental-health-and-older-adults/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397213.30/warc/CC-MAIN-20160624154957-00011-ip-10-164-35-72.ec2.internal.warc.gz
en
0.956651
357
3.390625
3
Syracuse, NY -- The Legionella bacteria has been found again in the tap water supply at the Onondaga County-owned Van Duyn Home & Hospital nursing home, prompting officials there to provide bottled water to residents at risk of becoming ill from the bacteria. Dr. Cynthia Morrow, Onondaga County health commissioner, said the bacteria is commonly found in soil and water supplies. It can cause pneumonialike symptoms, known as Legionnaires’ disease, if it gets into a person’s lungs. Morrow said the bacteria poses no threat in normal concentrations to healthy people but can be a threat to persons whose health already is compromised by other medical conditions, especially severe lung diseases. She said a resident of the nursing home who died in April had the bacteria, but that it is not known if it was the cause of the person’s death. A second Van Duyn resident who contracted the bacteria and became ill earlier this year has recovered, she said. A Van Duyn resident who died last summer also had the bacteria, but it also is not known if it caused that person's death, she said. Morrow said it is not known whether the residents got the bacteria from the tap water at the nursing home or from other sources. There are five to 15 cases of people becoming sick from Legionella in Onondaga County each year, and in most cases the source of the bacteria cannot be identified, she said. This is not the first time that the bacteria has been found in the water at Van Duyn. In the summer of 2008, an outbreak of Legionnaires' disease in the Onondaga Hill area sickened 13 people, one of whom died. Public health officials pinpointed the source of that outbreak to an air-conditioning cooling tower at Community General Hospital. During the course of that investigation, public health officials discovered Legionella in Van Duyn's water system. Since then, Van Duyn has been working in consultation with the state Department of Health to ensure the nursing home's water supply is safe, Morrow said. The nursing home routinely tests its tap water, she said. Legionnaires’ disease and the bacteria that causes it got their names in 1976, when dozens of people at a Philadelphia convention of the American Legion suffered from an outbreak of the disease and 34 of them died. According to the federal Centers for Disease Control and Prevention, Legionella is found naturally in the environment and grows best in warm water, like the kind found in hot tubs, cooling towers, hot water tanks, large plumbing systems and parts of the air-conditioning systems of large buildings. People get Legionnaires’ disease when they breathe in a mist or vapor that has been contaminated with the bacteria. It can be treated with antibiotics, but up to 30 percent of patients who contract the disease die from it. Those most at risk of getting sick from the bacteria are the elderly, smokers, people with chronic lung diseases and people with weakened immune systems. Reach Rick Moriarty at [email protected] or (315) 470-3148.
<urn:uuid:02eb6a06-d2af-4cb7-9412-0d633501df69>
CC-MAIN-2016-26
http://www.syracuse.com/news/index.ssf/2010/06/legionella_bacteria_found_in_t.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394937.4/warc/CC-MAIN-20160624154954-00040-ip-10-164-35-72.ec2.internal.warc.gz
en
0.968509
642
2.5625
3
The drive for more sustainable, or eco-friendly, products is leading to a host of new collaborations and innovations by chemical manufacturers. BASF, Ludwigshafen, Germany, exemplifies the trend. It is one of 17 partners involved in BIONEXGEN (Next Generation of Biocatalysts), a research project supported by the European Union. The three-year project draws on both industrial and academic expertise and aims to develop a new generation of biocatalysts for more sustainable production processes in the chemical industry. The partners have identified four key technology areas: amine synthesis, polymers from renewable resources, glycoscience, and wider oxidase applications. BASF, which is investing €1.3 million ($1.7 million) of its own money alongside €600,000 ($791,000) of EU funds from the European Research FP7 program, is particularly focused on projects that involve the biocatalytic synthesis of amines and the use of enzymes in the manufacturing of functional polymers (Figure 1). Commenting on the overall project, Dr. Kai Baldenius, head of biocatalysis research at BASF and the man with responsibility for BIONEXGEN, says, "It is a general target of BASF to make processes more efficient. In this EU project, we engage in early phase research, searching for new, highly selective biocatalysts." Amines are among the most important family of compounds produced by chemical makers — and used in bulk for manufacture of pharmaceuticals, agrochemicals, polymers and speciality chemicals. Traditional routes to amines often rely on toxic metal reagents and catalysts that mandate costly protective measures and produce wasteful byproducts. Biocatalytics offer great potential to reduce cost and the amount of waste products. Research here will focus on three enzyme classes: monoamine oxidases, ammonia lyases and transaminases. BASF's amine synthesis goals within BIONEXGEN will build on existing experience, notes Baldenius: "BASF has successfully commercialized the biocatalytic production of enantiomerically pure chiral amines for the pharma and agro industry. We now strive to extend the scope of biocatalytic amine production to a broader range of amines. How far this scope can be extended remains unknown at this very early stage." Glycoscience and oligosaccharide synthesis is one of the most challenging disciplines in organic synthesis, often requiring complex protection and reaction strategies. Currently biotechnology can serve in a few glycosynthetic and oligosaccharide transformation methods. However, these methods aren't yet routinely applied industrially, although it's acknowledged that enzymatic systems offer great potential for selective synthesis. This part of the project is developing methods that will prove valuable for simplifying the synthesis of molecules of pharmaceutical, nutraceutical and household chemical interest. Researchers at the University of Manchester, Manchester, U.K., and BASF are combining biology, chemistry and molecular biology to synthesize a variety of glycoproteins, glycolipids and polysaccharides, all of which are important molecules in medicine and nutrition. BASF also is using BIONEXGEN to build on its existing knowledge on the applications of enzymes to glycoscience and oligosaccharide synthesis. Baldenius explains: "Polysaccharides such as starch are a masterpiece of nature's synthetic power. We believe that clever derivatization can provide them with properties usually found in petro-derived polymers, so that the biopolymers can be used, for example, as dispersants. 
The framework of BIONEXGEN is to identify new enzymes and basic fields of application. Following proof-of-concept, the actual product development would then be carried out at BASF, or one of the other industrial partners." In early November 2011, the French National Center for Scientific Research, Rhodia, the Ecole Normale Supérieure of Lyon and the East China Normal University officially opened the Laboratory of Eco-efficient Products and Processes (E2P2), an international joint research unit in Shanghai, China. Based at Rhodia's research center in Shanghai, the new laboratory is dedicated to developing eco-efficient chemical processes. It will house research jointly carried out by scientists from academic institutes in China and Europe working together with industrial partners. "All our research projects are assessed by a methodology based on the principles of lifecycle analysis. At a very early stage in designing a product or process, this analysis validates the pursuit of a research project if the results reveal a clear benefit with respect to human health and environment," explains E2P2 laboratory director Floryan Decampo, who comes to the lab from Rhodia. "Our E2P2 lab is another step further in this effort by targeting specifically new technologies capable of reducing significantly the use of fossil raw materials in specialty chemicals and hence reducing the carbon footprint of both our products and processes." Decampo says that partnering with top academic institutions is key because the projects being carried out in Shanghai typically pose significant scientific challenges and may require breakthrough innovations. "The unique feature of this lab is that it assembles within the same team experts in different key competencies — including chemistry, polymers, catalysis, industrial, theoretical and eco-efficiency — allowing them to quickly tackle key challenges and deliver faster solutions." Although Rhodia has other research centers around the world, Decampo says China was chosen for this latest investment — none of the financial details have been revealed — for three main reasons: Rhodia has a long presence in China, which is a key area for chemical industry growth; the country is facing some major environmental challenges, in part from the fast development of its chemical industry; and, with the rise of Chinese academic research to world-class stature, being in China presents a unique opportunity to develop strong partnerships with some of the best laboratories in their respective fields. The center will help Rhodia with its "Rhodia Way" set of sustainability commitments, which include cutting water use by 10% between 2010 and 2015, decreasing energy consumption by 8% in the same period, and developing products that contribute to a reduced carbon footprint — for example, eco-friendly solvents, and plant-based products for body and hair applications. "E2P2 is another strong initiative and a long-term commitment to sustainable development by ensuring that the new chemistries that will be developed in the future are eco-friendly and deliver significant environmental benefits. Thus, even upstream research is now focused and committed to sustainable development," notes Decampo. The first projects will focus largely on carbon-based products, mainly surfactants or plastics — with the hope that the associated technologies will also deliver new businesses for the Rhodia group as a whole. "Most of the projects have two firm targets, one environmental and another one economical. 
For example, for the projects aiming at replacing oil-based raw materials by bio-sourced raw materials, the target is to reduce by 30–50% the overall carbon footprint of products compared to existing industrial benchmarks. Of course, to achieve a full benefit and to replace the existing technologies, there must also be an economical target that is realistic." Decampo anticipates that moving new technologies from the laboratory to industrial scale will take between two and ten years, depending on the particular project. Meanwhile, Mitsubishi Rayon Company and subsidiary group Lucite International, both part of the Mitsubishi Chemical Holdings Corporation, Tokyo, are continuing their drive for innovation by developing sustainable feedstock sources for producing methyl methacrylate (MMA). They plan to use sustainable feedstocks for commercial MMA production by 2016, and to get at least 50% of their MMA output from these sources as soon after that as possible. To achieve this, the companies are investing in two approaches: using renewable feedstock sources as raw materials in existing processes, and developing novel routes for producing methacrylate monomers directly from renewable sources. Simultaneously, the companies will continue to innovate in catalysis and process technology to reduce resources consumed per unit of output in all of their activities. "In terms of alternative feedstocks, in the short term, there are some potential bio-based feedstocks for Mitsubishi Rayon Group's existing MMA plant, including acetone, ethylene (from ethanol) and isobutylene (from isobutanol). In the long term, carbohydrates are the most promising feedstocks," says spokesman Hiro Naitou. A number of new processes are being advanced in parallel, with one, which he declines to identify, already at the scale-up stage. Naitou and others from Mitsubishi Rayon are named in U.S. Patent 7,557,061, which outlines a method for producing a catalyst containing molybdenum and phosphorus for use in synthesizing MMA through gas-phase catalytic oxidation of methacrolein with molecular oxygen. According to Naitou, this latest initiative takes a different direction: "The technology of sustainable MMA is altogether different from previous catalysts and processes. Mitsubishi Rayon Group is doing the R&D at corporate research laboratories in Japan and U.K." In December, Toray Industries, Tokyo, announced it had produced laboratory-scale samples of the world's first fully renewable polyethylene terephthalate (PET) fiber by using PET derived from bio-based paraxylene supplied by Gevo, Eaglewood, Colo. Gevo converts isobutanol produced from biomass into paraxylene via a production method that uses synthetic biology in a conventional commercial chemical process. Toray made PET from terephthalic acid synthesized from Gevo's paraxylene and commercially available renewable monoethylene glycol by applying a new technology and polymerization. This bio-based PET exhibits properties equivalent to petro-based PET in laboratory conditions. "The success of this trial, albeit under laboratory conditions, is proof that polyester fiber can be industrially produced from fully renewable biomass feedstock alone. This is a significant step that would contribute to the realization of a sustainable, low-carbon society," says the company. In a separate development, in early December 2011, Gevo received U.S. Patent 8,017,358 on another aspect of its yeast technology that enables low-cost, high-yield production of bio-based isobutanol. 
The patent covers additional "Methods of Increasing Dihydroxy Acid Dehydratase (DHAD) Activity to Improve Production of Fuels, Chemicals, and Amino Acids." "This invention further details and protects the innovations contained in the Gevo yeast organism to turn an industrial yeast strain into a highly efficient cell factory to produce isobutanol," notes Brett Lund, executive vice president and general counsel. Verdezyne, Carlsbad, Calif., has started up a pilot plant there to make bio-based adipic acid, a key component of nylon 6,6, via a yeast fermentation process that uses non-food plant-based feedstocks (Figure 2). Because of the demand for nylon, the global adipic acid market today is said to amount to more than $6 billion/yr. "We are excited to achieve this key milestone," says Dr. E. William Radany, president and CEO. "This is the first demonstration of the production of bio-based adipic acid at scale from a non-petroleum source. Our novel yeast platform enables production of adipic acid at a lower cost than current petrochemical manufacturing processes." Verdezyne's approach reportedly offers a number of other advantages over petroleum-based methods, including less generation of carbon dioxide and other pollutants. "This plant will allow us to demonstrate the scalability of our process, validate our cost projections and generate sufficient quantities of material for commercial market development," notes Dr. Stephen Picataggio, chief scientific officer. Meanwhile, P2 Science, New Haven, Ct., a Yale University spin-off, is using patent-pending technology from the Yale Center for Green Chemistry and Green Engineering to develop and manufacture a new class of high-performance surfactants, C-glycosides (CGs). CGs can be used in a range of consumer and industrial products such as detergents, personal care products, cosmetics, lubricants, hard-surface cleaners and emulsion polymers as well as in mining and oilfield chemicals. The new surfactants are mild in use, stable, customizable and manufactured in low-energy-intensive conditions, says the firm. Carbohydrate-based surfactants have long been of interest because of their desirable performance properties and potential to be derived from renewable feedstocks. Although most carbohydrate-based surfactants utilize an O-glycoside linkage, recent advances in carbohydrate C–C bond formation have allowed for the synthesis of new classes of carbohydrate-based surfactants using a C-glycoside linkage. Seán Ottewell is Chemical Processing's Editor at Large. You can e-mail him at [email protected].
<urn:uuid:9a9c317e-54f3-4b02-b571-9705b8e80587>
CC-MAIN-2016-26
http://www.chemicalprocessing.com/articles/2012/eco-friendly-developments-blossom/?show=all
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391634.7/warc/CC-MAIN-20160624154951-00056-ip-10-164-35-72.ec2.internal.warc.gz
en
0.931489
2,736
2.53125
3
Wall-mounted televisions, garage door openers and security systems are examples of electrical appliances that might require a ceiling outlet. When installing ceiling outlets, a power source is required. Examples of power sources for a ceiling outlet are an existing light fixture, a junction box or a new circuit run from the breaker box. Installing ceiling outlets is easier to complete in houses with an attic that allows access to existing wiring and junction boxes. Things you will need: a cut-in electrical box. Locate an adequate power source that will supply the ceiling outlet. Light fixtures and ceiling fans located near the desired location of the ceiling outlet sometimes have a continuous power source. Turn off the applicable fixture and remove it to expose the wires. Use a voltage meter to test the wires to determine if there is a separate circuit. Check the attic area for other types of power sources such as junction boxes. Remove any cover plates and test the wires in the junction box using a voltage meter. If a light box or junction box is not available for a power source, install a new wire directly from the breaker box. Choose a suitable location for the new outlet box. Place the outlet box in an area away from existing ceiling joists or trusses for easier installation. Use a stud finder to identify existing studs or simply inspect the outlet box location from inside the attic area. Install the ceiling box that will house the new outlet and wire. Trace the outline of the new ceiling box onto the ceiling in the desired location. Use a keyhole saw to cut out the ceiling box opening. Insert the applicable wire from the power source into the cut-in box. Install the cut-in box into the opening and tighten the screws that hold it in place. A cut-in box is a type of box used in remodel applications. The box works by tightening screws attached to a bracket or flaps that open and hold the box securely against existing drywall. This eliminates having to attach the ceiling box to a joist or truss. Connect the wires to a receptacle using a screwdriver. Black or common wires connect to the brass or copper screw on the receptacle. White or neutral wires connect to the silver screw, and bare copper or green ground wires connect to the green grounding screw located on the bottom of the receptacle. Insert the receptacle and wire neatly in the ceiling box. Tighten the mounting screws to secure the receptacle. Install the receptacle plate to finish the project. When working in attics and with electricity, have someone nearby in case of any accidents. Use drop cloths to protect flooring from dropped tools and debris. After identifying a suitable power source, turn off the main breaker before installing additional wiring. Finding a suitable power source for the ceiling outlet box requires testing hot wires with the meter; use extreme caution when working with electricity. Without adequate knowledge and skill working with electricity, hire a licensed professional. Wear a dust mask or respirator when working in attics. Use extreme caution when working in attics. It gets hot up there and sharp objects exist that can cause injury.
<urn:uuid:e80948bc-2509-4047-a20b-b90c073e63c1>
CC-MAIN-2016-26
http://home-renovations.yoexpert.com/home-renovations-general/where-to-begin-when-installing-ceiling-outlets-32085.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397213.30/warc/CC-MAIN-20160624154957-00008-ip-10-164-35-72.ec2.internal.warc.gz
en
0.858164
650
3.09375
3
Here are some interesting unit-wise options to crack the Class 12 CBSE Board physics paper. Remember, there will be no HOTS this year as well, so just 15% of questions are going to be difficult, giving you the chance to prove your smartness in understanding the concepts of the subject. The remaining 85% of questions will be easy and average, testing your planned average study. So, do keep worries, stress and fear at bay. As you know, your physics (theory) paper is divided into 10 units with different weightages. Though questions can be asked from any section of the syllabus, keeping time availability in mind it would be wise to concentrate on important theories, concepts, formulae and derivations. It is equally important to draw the relevant graphs and diagrams (schematic, circuit and ray) to give the final touch-up to your preparations. So, here it goes:
Unit 1: Electrostatics
Revise: SI units and dimensions of electric charge, field, dipole moment, flux and charge densities, potential, capacitance and polarisation. Drawing field lines and equipotential surfaces (EPS) for dipole, two-charge and single-charge systems. Vector form of Coulomb's law. Gauss's theorem, electric dipole, electric field lines and equipotential surfaces, capacitor, Van de Graaff generator.
Remember: Charge is a scalar but the electric field created by it is a vector, whereas the potential is again a scalar. Electric flux is a scalar. A dipole experiences no force but a pure torque in a uniform electric field, whereas it experiences both a force and a torque in a non-uniform field. Gauss's law is valid only for closed surfaces. The three types of charge densities, viz. linear, surface and volume, are different physical quantities with different units and dimensions. Along a field line, potential decreases at the fastest rate. The dipole moment per unit volume is called polarisation and is a vector. Whether it is a solid or a hollow conducting sphere, all free charges reside on its surface. The dielectric constant is also called relative permittivity and is dimensionless and unitless.
Unit 2: Current electricity
Revise: SI units and dimensions of mobility, resistance, resistivity, conductivity, current density and emf. Ohm's law, drift velocity, colour coding. Parallel/series combination of cells. Potentiometer. Numericals on finding equivalent resistance/current using Kirchhoff's laws.
Remember: Current is a scalar as it does not follow the laws of vector addition, but current density is a vector. Kirchhoff's junction/loop laws are charge/energy conservation laws. If the galvanometer and cell are interchanged in a balanced Wheatstone bridge, the balance is not affected. For a steady current along a tapering conductor, the current remains constant but current density, drift speed and electric field vary inversely with the area of cross-section. Ohm's law is not universally applicable; exceptions include the vacuum diode and the semiconductor diode.
Unit 3: Magnetic effects of current and magnetism
Revise: SI units and dimensions of permeability, relative permeability, magnetic moment, field, flux, intensity, susceptibility and torsional constant, and their nature as vectors or scalars. Magnetic field lines. Biot-Savart and Ampere's laws, solenoid, toroid, MCG, cyclotron, para-, dia- and ferromagnetism, permanent magnets and electromagnets. Numericals on ammeters and voltmeters.
Remember: Parallel currents attract and antiparallel currents repel. Ampere's law can be derived from the Biot-Savart law. An MCG has two sensitivities, voltage and current, defined as deflection per unit voltage/current, respectively.
The angle of dip is also called inclination; its values at the poles and at the equator are 90 degrees and 0 degrees, respectively. Superconductors are perfect diamagnets. The tesla (T) is the SI unit for magnetic field, the other being the gauss (G, non-SI); 1 T is equal to 10,000 gauss. Diamagnetism is universal - it is present in all materials.
Unit 4: EMI and AC (08 marks)
Revise: SI units and dimensions of self and mutual inductance, capacitive and inductive reactance, impedance, Q-factor, power factor. Faraday's/Lenz's laws, eddy currents, motional emf, self/mutual inductance, AC generator, transformer.
Remember: Lenz's law is a consequence of energy conservation. Eddy currents have both merits and demerits. AC is a scalar but follows phasor treatment as it varies periodically. At resonance the power factor is 1, hence maximum power is dissipated. A transformer works on AC but not on DC. The power consumed in an AC circuit is never negative. Rated values of AC devices for current and voltage are rms values, whereas for power it is the average value. The higher the Q-factor, the sharper the resonance, the smaller the bandwidth and the better the selectivity.
Unit 5: Electromagnetic waves
Revise: Properties and frequencies, Ampere-Maxwell law, displacement current, drawing of EM waves. Numericals on finding frequency, speed etc. from a given equation.
Remember: An oscillating charge produces EM waves of the frequency of oscillation. IR waves are also called heat waves as they produce heating. The AM (amplitude modulated) band is from 530 kHz to 1710 kHz. TV waves range from 54 MHz to 890 MHz. The FM (frequency modulated) radio band extends from 88 MHz to 108 MHz. TV remotes use IR waves. LASIK and water purification use UV rays.
Unit 6: Optics
Revise: Lens formula and lens maker's formula, magnifying and resolving power, limit of resolution, Huygens' principle and polarisation, YDSE (Young's double-slit experiment). Numericals on image location and its nature for lens-mirror combinations.
Remember: Resolving power is the inverse of the limit of resolution. Unpolarised light, after passing through a polaroid, gets linearly polarised with half the intensity for any orientation of the polaroid. Diffraction, interference and polarisation prove the wave nature of light. Polarisation proves the transverse nature of light. A compound microscope has an eyepiece of larger aperture and a smaller objective; vice versa in a telescope. A reflecting telescope removes chromatic and spherical aberrations fairly well. If the source of light in YDSE is white, the central fringe is white and the others are coloured in sequence, from the nearest red to the farthest blue.
Unit 7: Dual nature of matter and radiation
Revise: Einstein's photoelectric equation and all the graphs in the NCERT book. Davisson-Germer experiment. Numericals based on de Broglie's and photoelectric equations.
Remember: The de Broglie equation relates the particle nature to the wave nature. The wave nature of electrons is used in the electron microscope. The photoelectric effect was explained using the photon picture of light.
This helpline comes to you from CBSE. For queries, send an email to hteducation@hindustantimes.com, marked 'CBSE queries'. The author, Krishna Deo Pandey, is a physics expert. Source: HT Education
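As a quick reference for the Unit 7 numericals mentioned above, here is a small LaTeX snippet giving the standard textbook forms of Einstein's photoelectric equation and the de Broglie relation; the symbols are the conventional ones and are not taken verbatim from this article.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Standard textbook forms (symbols: K_max = maximum kinetic energy of the
% photoelectron, h = Planck's constant, \nu = frequency of incident light,
% W_0 = work function, p = momentum, m = mass, v = speed of the particle).
\begin{align}
  K_{\max} &= h\nu - W_{0} && \text{(Einstein's photoelectric equation)} \\
  \lambda  &= \frac{h}{p} = \frac{h}{mv} && \text{(de Broglie wavelength)}
\end{align}
\end{document}
```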
<urn:uuid:6ada7445-faa1-4c88-9428-98bd2b2d0b43>
CC-MAIN-2016-26
http://www.htcampus.com/article/tips-crack-class-12-cbse-board-physics-paper-1457/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397864.87/warc/CC-MAIN-20160624154957-00043-ip-10-164-35-72.ec2.internal.warc.gz
en
0.890275
1,537
3.78125
4
Titus was a convert, friend and helper of Paul’s. He was Greek, the son of Gentile parents. Unlike Timothy, Titus was not circumcised (Galatians 2:3). Yet, like Timothy, Titus was sent by Paul to minister to specific churches. First, he went to the church at Corinth (2 Corinthians 7:6-16). Then, he was sent to Dalmatia (a region on the eastern coast of the Adriatic Sea), which was another difficult area (2 Timothy 4:10). Finally, he went to pastor the church on Crete. We know very little of Titus from the Bible. According to ancient tradition, Titus returned to Crete in his old age, died and was buried there at the age of 94. [See also “Who Were Timothy and Titus?” in the Overview.] To refresh your memory, read the letter to Titus. 1. Read Titus 1:1-4. Paul often introduced a letter with comments relevant to the letter’s message. Which words or ideas are included in this introduction that you may have also noticed in the whole letter to Titus? In other words, why do you think Paul is writing this letter? 2. Verse 1 speaks of the “truth that leads to godliness” (or “truth which is according to godliness”). Read the following passages and summarize what Jesus says is “truth.” · John 8:31-32— · John 14:6— · John 17:1-8— · John 17:17— 3. According to John 14:16-18 and John 16:13-14, how does the believer continue to discern truth? 4. Read Titus 1:5. For what two purposes did Paul send Titus to Crete? 5. Review the qualifications of elders in Titus 1:6-9. In verses 6 & 7, what character trait did Paul use twice? Why do you think he emphasized this point? Historical Insight: The Cretan character was proverbial in the ancient world. In Greek, to “Cretanize” meant to lie. The prophet Paul mentioned in verse 12 was Epimenides, a Cretan philosopher of the sixth century BC. Most educated men of Paul’s day had to study Epimenides. (Titus Lifechange Series Bible Study) 6. Read Titus 1:10-16 & 3:9-11. In contrast to the characteristics of an elder (given in verses 6-9), how does Paul describe the false teachers in Crete? 7. What kind of influence do false teachers have? 8. How did Paul want Titus to deal with these false teachers? Be sure to look at both passages from question 6. What is the goal of treating them in this manner? 9. Adorning Yourself: What can you do to avoid unprofitable discussions or “empty talk” and ensure healthy ones? 10. “To the pure, all things are pure” (verse 15) is a statement that could easily be abused…either to excuse sin, or to judge/condemn others. Summarize these similar instructions given by Paul: · Romans 6:15— · Romans 14:1-3; 22— · 1 Corinthians 6:12-13— · 1 Corinthians 10:23-24— 11. In light of the previous passages, what do you think Paul means by, “To the pure, all things are pure”? 12. Compare what Paul says about false teachers to what Jesus says about the Pharisees in Mark 7:5-13 and Luke 11:42-44. How are the false teachers and Pharisees alike? How are they different? 13. According to Titus 1:16, how can a person who claims to know God actually be denying God? 14. Adorning Yourself: Reflect on verse 16 this week. Do your daily actions deny or reflect a relationship with God? Ask God to show you how you can better live a life that reflects your faith. Think About It: Titus is a short epistle, but it contains such a quintessence of Christian doctrine, and is composed in such a masterly manner, that it contains all that is needful for Christian knowledge and life. (Martin Luther)
<urn:uuid:c9b2d5e6-403b-479f-a560-03b7125eb916>
CC-MAIN-2016-26
https://bible.org/print/book/export/html/20559
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783402746.23/warc/CC-MAIN-20160624155002-00183-ip-10-164-35-72.ec2.internal.warc.gz
en
0.947232
912
2.90625
3
Lareef A. Samad, B.Sc. (Hons)
Quahog (KO-hog) pearls are rare non-nacreous pearls produced by the saltwater clam Mercenaria mercenaria (Venus mercenaria), whose natural habitat is the Atlantic coastline of North America, from Canada's Gulf of Saint Lawrence to Florida and extending to the Gulf of Mexico, and which is particularly abundant between Cape Cod in Massachusetts and New Jersey. Quahog clams were known to the ancient Algonquin Indians, who used their meat as a source of food and their shells for ornamentation and a form of currency known as "wampum." In fact, the name "quahog" is derived from the Algonquin Indian name for this clam, "Poquauhock." Apart from their use as food, ornamentation and a form of currency, quahog clams also produce extremely rare non-nacreous pearls in a range of colors, such as white, pale lilac to purple, tan, brown and black. The occurrence of gem-quality quahog pearls has been estimated by pearl experts to be around one in 2 million, which is indeed very, very rare. The extreme rarity of the pearl explains why not many gem-quality quahog pearls are known in the world today. Quahog pearls are so scarce that not even the international gem and pearl trade is in a position to give a proper appraisal of these rare natural beauties from the Atlantic coast of America, in a world that is dominated today by bead-nucleated cultured pearls.
Satellite view of the North Atlantic coastline of North America, natural habitat of the Quahog clam, Mercenaria mercenaria
Part of the problem of the quahog pearl being relegated to a secondary position in the world of natural pearls today is the unfair technical definition of pearls adhered to by gemologists and the international who's who of the trade, including dealers and producers of cultured pearls. According to this definition, "true pearls" are those pearls covered by a nacreous layer, that show the luster and iridescence caused by nacre, and that are produced by saltwater oysters and freshwater mussels. This definition automatically excludes all non-nacreous pearls produced by other mollusks, such as clams, conches, scallops and melo melo snails, even though pearls produced by some conches, quahog clams and melo melo sea snails show a spectacular shimmering chatoyant effect that exceeds the luster and iridescence of some low-grade nacreous pearls. The beauty of some non-nacreous pearls is in no way second to that of the most beautiful nacreous pearls. Thus, the definition of the word "pearl" as applied in the international gem and jewelry trade has unfairly excluded some of the most beautiful specimens of quahog pearls ever discovered, and has denied the owners of these rare beauties a reasonable value commensurate with their beauty and rarity.
Intense orange Melo Melo pearl, a non-nacreous pearl whose shimmering flame structure surpasses the beauty and iridescent effect of some nacreous pearls
Queen Mary Conch Pearl Brooch - the flame structure associated with the two conch pearls surpasses the iridescence of some nacreous pearls
Rare spherical Quahog pearl with a shimmering effect surpassing the beauty of some iridescent nacreous pearls (photo courtesy of Imperial-Deltah Inc.)
The quahog clam continues to be a staple of the Eastern seafood market, as it had been for hundreds of years before. Baked clams, steamers and chowder are favorite dishes all over the eastern states.
Unfortunately, any rare quahog pearl that might have been found in the clams was invariably destroyed by the mechanical cleaning and shucking process. In spite of this, many rare quahog pearls have been discovered by unsuspecting customers while enjoying a delicious dish of clams at seafood restaurants they had patronized. The George and Leslie Brock Pearl is one such rare quahog pearl discovered accidentally in a restaurant. However, most of the accidental discoveries seem to have taken place during home processing of the clams, in the cleaning stage prior to cooking, such as the Krensavage Pearl. Some of the rare quahog pearls discovered accidentally, either while eating prepared clam dishes or during home processing of clams, are:
1) Bob Anderson Pearl - discovered in 2002
2) Krensavage Pearl - discovered in 2005
3) George and Leslie Brock Pearl - discovered in 2007
4) Connor O'Neal Pearl - discovered in 2009
Bob Anderson, living at 26 Old Homestead Way, used to dig for quahog clams at Wychmere Harbor as a pastime, bringing the clams home to prepare his favorite clam dish. Wychmere Harbor is one of the three most beautiful harbors in Harwich Port, in Barnstable County, Cape Cod, Massachusetts, in the northeastern United States, the other two harbors being Allen Harbor and Saquatucket Harbor. Harwich is a lower Cape Cod town situated at the elbow of Cape Cod. Cape Cod Bay, and the bays and harbors south of Cape Cod, are well known for the abundance of quahog clams. That day in November 2002 turned out to be a very special day for Bob Anderson, and as usual he returned home with his find of quahog clams. Then he set about the manual cleaning and shucking of his clams, and halfway through the process he heard something hard hit the bottom of the sink. His inquisitiveness aroused, Bob Anderson began investigating the cause of the sound, and to his greatest shock and surprise it turned out to be a very dark, smooth and rare purple pearl from one of the quahog clams. Bob Anderson had heard about quahog pearls and their extreme rarity, but had never dreamt that one day he would be the lucky finder of one of these rare beauties.
Bob Anderson quahog pearl and the shell in which it was found
Bob Anderson's quahog pearl appears to be a smooth spherical pearl in the photograph, but its dimensions are given as 9.8 mm and 7.1 mm, which obviously refer to two of its diameters. Thus the pearl is most probably a round button-shaped pearl, which is the next preferred shape after the perfectly spherical pearl. The clam shell from which the pearl originated measures 69.2 mm across. The dark-purple color of the pearl is a very rare color indeed, and this, coupled with its smooth surface and luster, makes it an extremely rare find. At the time he discovered the pearl in 2002, Bob Anderson had not decided exactly what he would do with his rare find, but was planning to retain ownership of the pearl, getting it set either on a ring or a pendant, two ideal pieces of jewelry for setting a round button-shaped pearl. The Krensavage Pearl is another quahog pearl discovered accidentally, by Ted and Barbara Krensavage in 2005, during the cleaning and shucking of quahog clams. The pearl was discovered in Newport, Rhode Island, one of the states of New England in the northeastern United States, renowned for the occurrence of purple quahog pearls.
One of the world's best known quahog pearls, "The Pearl of Venus" which is the centerpiece of the Alan Golash Brooch, which also incorporates a second drop-shaped purple quahog pearl, is believed to have originated also in Rhode Island. Barbara Krensavage did not have a particular liking for clam dishes, except for the irresistible craving for clams casino, which she had tasted the previous night, and was the cause of her venturing out again, one Friday afternoon in early December 2005, despite the severe blizzard and falling trees that made her mission dangerous. The sole purpose of her mission was to collect more clams from a Newport seafood restaurant, in order to satisfy her craving. She returned with around four dozen quahog clams, which her husband Thaddeus "Ted" Krensavage, an anesthesiologist at Morton Hospital & Medical Center, in Taunton, set about cleaning and shucking. Halfway through the shucking he picked up a shellfish that looked like a rotten clam. Yet he decided to open it to verify its nature before discarding it. On opening the clam, Ted noticed a tiny purplish object, but it never occurred to him that it could be a pearl, for he had never heard of pearls occurring in clams, and thought that the clam was diseased. He scraped the contents on to a plate of discarded clam shells, and was ready to throw it away, when his wife Barbara walked in. Ted and Barbara's interview given to Tracy Smith of CBS/Early Show, captures the excitement of the moment. Ted said, "As I opened it, I noticed this. I didn't know what it was. A hard thing that isn't supposed to be in clams, I knew or thought anyway. And I scraped it into a plate of discarded clam shells, and was planning on throwing it out. Then Barbara came over." Barbara continuing the story where Ted had left off, said, "I said, 'Let me see that thing, It might be a pearl!' " Closer examination revealed, that the purple object was indeed a perfectly round brilliant-purple pearl, about the size of a large pea. Ted and Barbara searched the world wide web for more information on their purple pearl, and soon they realized that what they had stumbled upon was indeed a natural treasure, an extremely rare quahog pearl, with the desired characteristics such as the purple color and the perfectly round shape. They further discovered that only a handful of such pearls existed in the world today, so much so that Ted's attempts to get an exact valuation of the pearl proved extremely difficult as most gemologists and appraisers had not come across such a pearl, and had no idea about its value, as the pearl was new to the trade. Ted was non-committal about the future plans for his rare find, when he said, "If it is worth $10,000, we'll probably keep it, it'll be a family treasure. But if it's worth more than a quarter million, we might put it up for auction." A clear reflection of the uncertainty surrounding the valuation of quahog pearls, however beautiful and rare they might be. The most striking features of the pearl according to its descriptions, are the deep-purple color, the perfectly spherical shape and its brilliant luster. Pale Lilac to deep-purple colors are the most desired colors in quahog pearls. The most renowned quahog pearl in the world, the 14 mm round-button "Pearl of Venus" which is the centerpiece of the Alan Golash brooch, has a medium-deep lilac color. The deep-purple color of the Krensavage pearl also falls within the desired color range. 
The perfectly spherical shape of the Krensavage pearl is the most desired shape for any pearl, and in this respect it is superior to the "Pearl of Venus." However, the size and weight of the pearl are said to be only about half those of the "Pearl of Venus." The brilliant luster of the pearl seems to be associated with its chatoyant effect, known as the "flame structure," as the pearl is non-nacreous, without a nacre. Ted and Barbara Krensavage got their rare find temporarily mounted on a gold ring, which they presented to their eleven-year-old son Michael for getting good grades. Michael, who is proud of the priceless gift given to him by his parents, says that he doesn't want to sell the pearl: "When would they ever buy a million-dollar gem? If you have one, just keep it. We are not selling it." Antoinette Matlins, author of "The Pearl Book: The Definitive Buying Guide," who had the opportunity of examining the Alan Golash Brooch mounted with the "Pearl of Venus," had called the gem "an extremely rare creation" and subsequently accompanied the pearl to the 2005 Tucson Gem and Mineral Show, where it received accolades for its rarity and beauty. She predicted that the Krensavage pearl, going by its description, might be valued in the thousands of dollars, even before getting an opportunity to examine the pearl. But she cautioned that the estimate could rise even further, depending on the outcome of the auction of the Alan Golash pearl brooch that was due to go under the hammer in Hong Kong after the brooch's appearance in the American Museum of Natural History's traveling exhibition, "Pearls: A Natural History," was completed in March 2008. This was because quahog pearls, being extremely rare, had no historical precedent in pricing, and the high-quality Golash quahog pearls were expected to set the benchmark for the valuation of quahog pearls in the future. However, more than a year has passed since the expected period of sale of the Golash Quahog Pearl Brooch, and there is no word yet on the sale of the brooch. It appears that the fate of the Golash Quahog Pearl Brooch is still tied down to the archaic definition of pearls adopted by the international pearl trade, which unfairly excludes exceptionally beautiful non-nacreous pearls such as the Golash quahog pearls and the Krensavage pearl. The Bob Anderson and Krensavage pearls were discovered accidentally during the cleaning and shucking of quahog clams prior to the actual cooking process. However, the George and Leslie Brock Pearl, found on New Year's Eve, December 31, 2007, was discovered accidentally by George Brock, who together with his wife was enjoying a $10 plate of steamed clams at Dave's Last Resort & Raw Bar after a day out on the beach in South Florida. The couple, who were relishing their dish of middleneck clams, were almost halfway through when George Brock chomped down on something hard, involuntarily pulled a bowl under his mouth and spat out the gritty substance. George and Leslie then peered into the bowl, and investigating the nature of the hard and gritty substance, George picked out a purplish spherical body from the bowl. He then washed it with water and to his amazement discovered that the purplish spherical body was in fact a rare purple quahog pearl, a perfect New Year gift to his beloved wife Leslie. A few other customers who were also in the bar at that time rushed to George and Leslie's table to have a look at the accidental discovery, and were impressed by what they saw.
Leslie who was overjoyed by her husband's lucky find, exclaimed, "It's like a dream. I can't believe it. I can absolutely not believe it." Their waitress couldn't believe her eyes either. She said, "I was surprised, yes. I think it's very good luck. May be it will be good luck for all of us in this new year." According to Dave's manager Tom Gerry, the restaurant obtained its supplies of clams from Apalachicola in the Panhandle of Florida, in the Gulf of Mexico. Apart from the hard clams that are harvested in the east coast, in the late 1970s three eastern states Massachusetts, New Jersey and North Carolina invested in hard clam aquaculture to meet the increase in demand for hard clams as a source of food. By the late 1980s hard clam aquaculture had spread to all the eastern coastal states from Massachusetts to Florida. Recently Florida and Virginia have emerged as the top most producers of hard clams by aquaculture. The supplies from Apalachicola may be from aquaculture sources or wild harvested hard clams. Quahog pearls of the northern waters in the region of New England, such as Rhode Island, Connecticut, Massachusetts, Maine and New Hampshire, are famous for their lilac to purple colors. These pearls have a color and luster that surpasses those of the southern warmer waters. The George and Leslie Brock Pearl lack the color and luster of the northern pearls. The color of the pearl is a pale-purple color, and the luster a medium-luster. However, the shape of the pearl is perfectly spherical, the most desired shape for any type of pearl. The size of the pearl is 6 mm, a medium sized pearl, when compared to the size of 14 mm for the "Pearl of Venus." The surface quality of the pearl is also excellent, without any blemishes, or tell tale signs of accidental biting. The pearl appears to have come out unscathed from its ordeal of steaming, followed by accidental biting. The accidental discovery of the George and Leslie Brock Quahog Pearl was widely reported in the print and electronic media and various opinions had been expressed by jewelers and pearl experts on the value of the rare quahog pearl, some optimistic and some pessimistic. According to news reports George and Leslie Brock took their gem to a jewelry store across the street. The owner of the store seemed awestruck when he saw the rare purple quahog pearl. According to him he had not seen such a perfect quahog purple pearl in decades, which is not surprising given the probability of occurrence of a gem quality quahog pearl being only one in 2 million. However, it is not known whether this particular jeweler had ventured to express an opinion on the value of the pearl. Vermont Gemologist Antoinette Matlins, author of "The Pearl Book : The Definitive Buying Guide" gave a rather optimistic opinion on the value of the pearl. Commenting on the Brock's quahog pearl she said, "few are round, and few are a lovely color, so this is rare. I think they have found something precious and lovely and valuable." She further said, "The value of the Brock's pearl rests largely in an exhibit with the American Museum of Natural History - a quahog pearl brooch. The eventual auction of this quahog pearl brooch will set the tempo for pricing purple pearls in the international market." Antoinette Matlins was obviously referring to the Alan Golash pearl brooch, that incorporated as its centerpiece "The Pearl of Venus" believed to be the largest and finest quahog pearl in existence. 
Due to their extreme rarity historical precedence in pricing quahog pearls is virtually non-existent. Hence the need for the outcome of the auction of the Alan Golash pearl brooch. The Brocks then took their pearl to another jeweler who confirmed the authenticity of the pearl and estimated its value to be around $25,000. Leslie Brock appears to have obtained further valuations for the pearl, as she told the West Palm Beach News, Florida, "A 4-millimeter, which we initially thought it was, could possibly bring in $25,000 to $45,000. But it turned out to be a 6-millimeter." In spite of the optimistic opinions expressed about the pearl, the Brocks themselves seemed to be uncertain about its true value, when they told the West Palm Beach News, Florida, that if the pearl turns out to be worth a ton of money, they will sell it and invest in real estate. Otherwise, they plan to put it in a pendant and give it to their granddaughter. The most pessimistic estimate of the Brock's pearl came from a jeweler while answering the question, How much does an iridescent purple pearl sell for? in Yahoo! Answers. He said, "Just recently a couple found a quahog in a plate of clams at a restaurant in Florida. Their jeweler told them it was worth about $25,000. I buy and sell these pearls as a profession, and from the description and picture in the story I would estimate it to be worth less than $500.The value of quahog pearl depends on several factors. The size, the shape, the color (how deep the purple is or whether it is actually lavender), the surface quality all play a large part. A quahog can sell for $100 per carat, or even more than $1000 per carat based on these attributes. According to this professional jeweler there may be only around 20 jewelers in the United States who would be able to give an adequate valuation of quahog pearls. The wide media publicity given to the Brocks accidental discovery, had a beneficial effect on the sales of the restaurant where the discovery occurred. According to the restaurant's general manager, Michael McClelland, after hearing about the Brock's find more diners ordered for clams, and almost everyday during that period, all clam dishes were sold out. The restaurant also took out an ad in a local newspaper, to encourage others to come in and try their luck, a strategy that worked until the excitement surrounding the accidental discovery died down. The Connor O'Neal Pearl is the latest quahog pearl to be discovered accidentally by biting on August 24, 2009, but the discoverer this time was a young 7-year old boy, Connor O'Neal, who was to enter grade I in the fall of 2009 at the Primrose Hill School, in Rhode Island. Clams were young Connor's favorite food, and his mother Mary K. Talbot, bought a batch of littlenecks at a Barrington supermarket to satisfy the boy's taste. Mary prepared "linguini" a clam dish the boy relished. As the boy was enjoying his dish of clams, he bit something hard and spit it out, and thought it was perhaps a pearl. His suspicions were proved correct, when on investigation it turned out that what he actually spat out was a small spherical purple quahog pearl. When asked how was he so sure that it was a pearl, young Connor replied confidently, that he studied a lot of marine biology, which was confirmed by his mother, who said that the child had an encyclopedic knowledge of the oceans. When asked if he planned on doing any shell fishing to seek out other pearls, he said, "No, I don't know if I like looking for pearls. I like eating clams. 
They are my favorite food." The quahog clam (Mercenaria mercenaria) is marketed commercially for human consumption under three main categories, based on the size of their shells: Littlenecks, Cherrystones (or Middlenecks), and Chowders. Clams between 2.0 and 2.9 inches (5.0-7.4 cm) are known as "Littlenecks," those between 3.0 and 4.0 inches (7.6-10.0 cm) are known as "Cherrystones," and those above 4.0 inches (10.0 cm) are known as "Chowders." The more valuable categories are the "Littlenecks" and the "Cherrystones," perhaps because they are more palatable than the "Chowders." The "Connor O'Neal" quahog pearl was discovered in a "Littleneck," whose size is between 2.0 and 2.9 inches when the clam is approximately 4 years old. Most of the bigger quahog pearls, such as the "Pearl of Venus" measuring 14 mm in diameter, actually originated in "Chowders," which are greater than 4 inches in size and more than 8 years old. Thus the "Connor O'Neal Pearl," discovered in an approximately 4-year-old "Littleneck," is most probably a tiny pearl less than 5 mm in diameter. In fact, photographs show that the pearl may be between 2 and 4 mm in diameter. The pearl appears to be perfectly spherical, and its color is a deep purple. The combination of spherical shape and deep-purple color, two of the most desired characteristics in quahog pearls, makes this pearl a rare find despite its smaller size. Professor Michael A. Rice, a professor of fisheries and aquaculture at the University of Rhode Island, said, "Pearls in quahogs are common enough. They probably will show up in 1 in 500 or something like that. Generally the pearls are whitish. But what is very, very rare is a purple pearl from a quahog. The purple color is a genetic trait. It has to do with certain proteins that are laid down in the shell. There are areas in Narragansett Bay where the purple is very prominent." He further said, "The mother-of-pearl lining in quahogs is not as lustrous as a standard pearl oyster. Their consistency as gemstones is not as brilliant or shiny as an oyster pearl from Asia or the South Pacific. With a well-formed quahog pearl, there have been cases where pearls have been worth several hundred to a couple of thousand dollars, based on rarity, color, all of those sorts of things that go into the aesthetics of the pearl." Thus, according to Professor Rice, quahog pearls are quite common (1 in 500), but the purple quahog pearl is very, very rare. In fact, the occurrence of purple quahog pearls is about 1 in 100,000, out of which only 1 in 20 are gem-quality; thus the probability of occurrence of a gem-quality purple quahog pearl is 1 in 2 million. Rhode Island is famous for its purple quahog pearls, and in certain areas such as Narragansett Bay their occurrence is very prominent. In fact, the largest and finest purple quahog pearl in existence, the "Pearl of Venus," is believed to have originated in Rhode Island. Professor Rice further states that quahog pearls are actually non-nacreous pearls, lacking the luster and brilliance of oyster pearls; for a well-formed quahog pearl he gives an estimate of several hundred to a couple of thousand dollars, based on rarity, color and other factors. According to Dr. Dale Leavitt, associate professor of marine biology at Roger Williams University, Bristol, Rhode Island, "All bivalves - clams, oysters and mussels among others - can generate pearls. 
If the bivalve gets a bit of sand or other grit inside its shell, it relieves the irritation by covering the intruder with nacre, smooth mother-of-pearl. The purple pearls get their coloration from the usually white mother-of-pearl interior lining of the shells. Blue shells are so rare that the Indian inhabitants of the Northeast shoreline used them as currency, calling them wampum. And so the shell lining is known as the wampum part. Purple quahog pearls are very pleasant to look at." Referring to the value of the quahog pearl, Dr. Leavitt further recalled, "Some years ago someone found a purple quahog pearl in a jewelry consignment in Bristol, Rhode Island, but it didn't prove to be a hot item. Someone told them it was worth hundreds of thousands of dollars, and they tried to sell it on eBay, but there was not much action." With respect to the quahog pearl found by Connor O'Neal, he said, "I would be very surprised if it had very much value." Mary K. Talbot, Connor's mother, said that the family would check out the value of the pearl, sell it if it is worth anything, and deposit the proceeds in Connor's college fund. On June 1, 2009, a rare purple quahog pearl, 10 mm in diameter and 5.5 carats in weight, was put up for auction at a Bonhams sale held in New York. However, the pearl was not offered for sale as a separate lot, but as part of a diverse collection of natural pearls with corresponding shells. The collection, assigned Lot No. 1429, consisted of 25 different pearls and their shells; the 5.5-carat, 10 mm purple quahog pearl was item no. 3 on the list. The pearls were not valued individually, but the pre-sale estimate of the entire collection was put at between $25,000 and $30,000. Dividing by 25, the average pre-sale estimate of each item on the list is between $1,000 and $1,200, which gives at least a rough indication of the quahog pearl's estimated value. In the auction catalogue, Lot No. 1429 was described as follows: "An astonishingly beautiful group formed over a period of several decades, including many rare and unavailable species, with a wide selection of both nacreous and non-nacreous natural pearls found in oysters, snails and various other mollusks from the Atlantic, the Caribbean, the Sea of Cortez and the Pacific Ocean." Unfortunately, there were no bidders for the lot, and it appears that the items were withdrawn from the sale. Both quahog pearls and conch pearls are non-nacreous pearls. Quahog pearls, particularly the lilac to purple variety, do not seem to have created much of an impression in the international pearl markets, despite their beauty and rarity. This is in sharp contrast to conch pearls, another non-nacreous pearl, which have staged a comeback after their earlier popularity in the late 19th and early 20th centuries, when they were incorporated in Art Nouveau and Edwardian jewelry of the Belle Epoque period (1901-1915). Since then the popularity of conch pearls waned, and the pearls were almost completely forgotten, particularly after the successful production of cultured Japanese Akoya pearls in the 1920s, which wiped out the natural pearl industry in many parts of the world. However, queen conches, which produced conch pearls, continued to be harvested in the Caribbean and the Gulf of Mexico, not for their pearls but for their meat, which became such a popular delicacy in this region that the queen conch became an endangered species. 
The continuous harvesting of queen conches ensured a steady supply of conch pearls as a by-product of the queen conch meat industry, but there were no takers for these pearls, except for pearl enthusiasts and collectors. One such collector of conch pearls was Susan Hendrickson, the marine archaeologist, paleontologist, and professional diver who is credited with the discovery, in 1990 in South Dakota, of the largest, most complete and best preserved fossil skeleton of Tyrannosaurus rex. Susan Hendrickson built up one of the largest collections of conch pearls in the world during her diving expeditions in the Caribbean. Conch pearls, like purple quahog pearls, are not extremely rare, the frequency of occurrence being about 1 in 10,000 queen conch snails; of these, only 1 in 10 are gem-quality. Thus the probability of occurrence of a gem-quality conch pearl is 1/10,000 x 1/10 = 1/100,000, whereas the probability of occurrence of a gem-quality purple quahog pearl is 1 in 2,000,000. Despite the fact that conch pearls are more common than purple quahog pearls, conch pearls have staged a recovery in the international pearl markets. This was partly due to the efforts of a single individual, Susan Hendrickson, who had gone into partnership with Georges Ruiz, the renowned Geneva-based jewelry manufacturer, to produce conch pearl jewelry and popularize its usage. Other factors that contributed to the comeback of conch pearls include the following: 1) A worldwide increase in demand for natural pearls in a market dominated by cultured pearls for over eight decades. 2) The availability of conch pearls in a wide variety of colors, such as pink, white, yellow, brown and golden, and the most sought-after color, a salmon-colored orange-pink. 3) The presence of the unique "flame structure," a type of "chatoyancy," particularly in the pinkish and whitish tones of conch pearls, which adds to their value. 4) The hardness of conch pearls, which is greater than that of most other pearls. Purple quahog pearls have also been used in jewelry since Victorian times, as evidenced by the discovery of the Golash Pearl Brooch in the year 2000, believed to be of mid-Victorian origin. Thus the popularity of quahog pearls goes back further in history than that of conch pearls. It is therefore surprising that purple quahog pearls have not made a significant impact in the international pearl markets, despite the fact that the demand for natural pearls is now on the increase, as seen in the strong auction market for pieces containing natural pearls. Please refer to the table of famous natural pearls and pearl jewelry sold at public auctions, and the prices realized, given on the following web page: Anna Thomson Dodge/Catherine the Great Pearl Necklace. Several factors may account for this lack of impact: 1) The extreme rarity of the pearl (1 in 2,000,000), which has made it relatively unknown in the international pearl markets. Usually rarity coupled with awareness increases the value of an article; rarity alone, without awareness of its potential, may not. 2) The bias created against purple quahog pearls by an archaic definition of pearls that gives them disparaging names such as non-nacreous pearls, calcareous concretions, and pseudo-pearls, whereas in fact some quahog pearls have a beauty and luster that surpass those of most low-grade nacreous pearls. The iridescence of nacreous pearls, which quahog pearls lack, is supplanted by the "flame structure," a type of chatoyancy caused by microfibrils of aragonite and calcite. 3) Old prejudices die hard. 
The prejudice created against quahog pearls may not be easily eliminated, despite the best of intentions of renowned gemologists like Antoinette Matlins, who have been trying to educate people about the merits of the quahog pearl and to give it its rightful place among the family of pearls. Little wonder, then, that the largest and finest quahog pearl in existence, the "Pearl of Venus," incorporated in the Alan Golash pearl brooch, had no takers, despite its mid-Victorian provenance, when attempts were made to sell it on eBay. 4) The refusal of human nature to accept the possibility that something worth a fortune could materialize in commonplace food items like littlenecks, cherrystones and chowders, which have almost become a staple food of the people of the eastern states of the United States. Even though quahog pearls are non-nacreous like conch pearls, they deserve better recognition from pearl enthusiasts, dealers and the international pearl trade. Some of the cogent reasons why they require an immediate reappraisal can be enumerated as follows: 1) Beauty: The overall beauty of quahog pearls, especially the lilac to purple shades, which are the rarest and most desirable colors in quahog pearls. Dr. Dale Leavitt, associate professor of marine biology at Roger Williams University, Bristol, Rhode Island, says that purple quahog pearls are very pleasant to look at, an apt description of their beauty by an expert in the field. 2) Satiny glow: Some quahog pearls have a medium luster, resembling the sheen observed on the surface of fine porcelain, which produces a rare satiny glow. 3) Flame structure: Some quahog pearls have a shimmering effect, a flame structure similar to that of conch pearls, caused by microfibrils of calcite and aragonite, a type of chatoyancy. In some quahog pearls the chatoyancy is expressed as a distinct "eye," as seen on the "Pearl of Venus." 4) Extreme rarity: Quahog pearls are extremely rare; the occurrence of a gem-quality purple quahog pearl is only 1 in 2,000,000. Normally, rarity is associated with higher prices, as in the case of diamonds, and quahog pearls should be no exception. 5) Durability: Quahog pearls are also durable. The quahog pearls mounted on the Alan Golash pearl brooch, which is believed to be of mid-Victorian origin, are approximately 150 years old. These pearls have retained their original beauty despite their age, and they should continue to do so for many more years to come if stored under proper conditions, without exposure to excessive heat and ultraviolet radiation. 6) Natural pearls: All quahog pearls are natural pearls, taking as much as 4 to 8 years or more to develop; most quahog pearls are discovered in "chowders," which are generally more than 8 years old. In the light of the above facts, quahog pearls need an immediate reappraisal of their value by the international gem trade, so that they are given their rightful place among the family of pearls. Reappraisal of gemstones is a common practice in gemology and the gem trade as more facts about a gemstone emerge. Tourmaline, first discovered and appreciated as a gemstone in Sri Lanka thousands of years ago, was previously considered a cheap gemstone and classified as semi-precious, but it has now been all but elevated to the status of a precious stone, commanding premium prices. 
Such a reappraisal of quahog pearls would give a much-needed boost to pearl hunters and prospectors, may result in an increase in the production of these rare beauties, and could stimulate scientific research into culturing quahog pearls artificially, thus increasing their availability. As an initial step, some of the state governments in the northeastern United States could step in and purchase some of these extremely rare beauties from their owners, paying an enhanced price commensurate with their beauty and rarity, to be added to their own natural history museum collections. This would go a long way toward boosting the prices of these rare pearls and helping them achieve a price level in keeping with their rarity. References: 1) Harwich Port, Massachusetts - From Wikipedia, the free encyclopedia. 2) Website of the Office of the Harwich Harbormaster - www.threeharbors.com 3) Cape Cod - From Wikipedia, the free encyclopedia. 4) Purple Pearl Was Almost Tossed - Rhode Island Man Didn't Realize What He Had: A 1-In-2-Million Find - By Brian Dakss. www.cbsnews.com 5) Clam craving led to find of rare quahog pearl - www.pearls.com 6) Couple Finds Rare Pearl in Plate of Steamed Clams - Sky News, Tuesday, January 01, 2008. www.foxnews.com 7) How much does an iridescent purple pearl sell for? - Yahoo! Answers. 8) Diners Find Rare Purple Pearl in Plate of Clams - www.wpbf.com 9) Lucky boy finds a pearl in his linguini - By Thomas J. Morgan. Rhode Island News, Saturday, August 29, 2009. 10) Bonhams Sale 17502 - Natural History - June 1, 2009. Auction Catalogue, Lot No. 1429, Diverse Collection of Natural Pearls with Corresponding Shells. www.bonhams.com 11) The Quahog Pearl - Pearl Jam - Pearl Perspectives - by Imperial. www.pearls.com Dr Shihaan Larif
<urn:uuid:30f0aa7b-22cb-40da-8dc1-e44ea8b924aa>
CC-MAIN-2016-26
http://www.internetstones.com/rare-quahog-pearls-accidentally-krensavage-george-leslie-brock-bob-anderson-conor-o-neal.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397696.49/warc/CC-MAIN-20160624154957-00147-ip-10-164-35-72.ec2.internal.warc.gz
en
0.966362
8,497
3.34375
3
SHALE RIPPLES SPREAD Shale gas is widely expected to remain an exclusively North American phenomenon until the second half of this decade, when liquefied natural gas exports from the U.S. may finally start reaching foreign shores. However, the ripples of the boom have, albeit indirectly, already spread across the Atlantic. The chain of unintended consequences ends in Slovakia, where German utility E.ON said Wednesday it is seriously considering shutting down a modern 430-megawatt gas-fired power plant because it is unprofitable, as The Wall Street Journal’s Jan Hromadko reports. Norwegian utility Statkraft also said it would mothball a gas plant it operates in Germany, saying it was struggling to compete against cheaper electricity from coal-fired power plants. This isn’t only happening in Central Europe. U.K. utility SSE said in March that it would reduce capacity at a pair of gas-fired power plants and wouldn’t invest in any new ones until 2015 at the earliest. A week later, the U.K. government revealed that coal-fired power surged 31% in 2012. Gas-fired power plants are struggling in Europe because coal there is cheap, largely because of a surge of imports from the U.S., which rose 23% last year, according to the Energy Information Administration. The U.S. is exporting more coal because it is being displaced from domestic power generation by cheap shale gas. It’s a neat solution to a tricky infrastructure problem. The U.S. currently has no way to export natural gas in large volumes, so coal has become a useful way for North America to find an overseas market for its shale-generated energy surplus. The direct solution to that infrastructure problem—LNG exports—continues to gather pace. Cheniere Energy says construction is ahead of schedule on the first two units of its Sabine Pass LNG export facility. U.K.-based BG Group added another potential North American export outlet Tuesday as it filed plans for an LNG plant in British Columbia, reports the Vancouver Sun. Some companies are even looking at small-scale LNG production on barges off the coast of Texas, Louisiana or Maryland, reports FuelFix. BRAZIL TALKS BIG The shale boom has cast a rather long shadow over the country that a few years ago was expected to be the rising star of oil and gas in the Americas—Brazil. The chief executive of Brazilian state-controlled oil giant Petrobras, Maria das Gracas Silva Foster, has been taking the opportunity to remind the industry at the Offshore Technology Conference in Houston that her company still has big things in its plans. Petrobras’s oil reserves will double in size by 2020 and by the end of the decade it will have plenty of crude available to export, reports the Journal’s Alison Sider. Production from the famous “pre-salt” offshore oil fields—which are trapped under thick layers of salt—will surge from around 310,000 barrels a day today to 1 million barrels a day by 2017, and 2 million barrels a day by 2020, reports FuelFix. GOOD NEWS FOR OPEC Despite its many troubles, the Organization of the Petroleum Exporting Countries has proved its resilience over the years. Its members’ ability to bounce back from adversity was demonstrated Tuesday as Algerian state oil company Sonatrach and U.S. partner Anadarko Petroleum started first oil production from the $4.5 billion El Merk oil complex, reports the Journal’s Benoit Faucon. Algeria’s oil and gas industry is still trying to recover from the deadly terrorist attack at the In Amenas gas plant in January. 
Anadarko said it had increased security in response to the attack, but had no intention of leaving Algeria. OPEC also took a small step toward resolving its internal divisions Tuesday, as it agreed to replace its outgoing head of research with an official from Saudi state oil company Aramco, the Journal reports. The role is important because it oversees OPEC's monthly oil market report, which spells out the group's views on oil supply and demand and influences its output decisions. OPEC's far more difficult task of choosing a new secretary-general, which, like the head of research job, has competing candidates from rivals Iran and Saudi Arabia, has yet to be resolved. Finally, dispelling any doubt that some OPEC members are still doing extremely well out of the oil business, Zawya reports that Saudi Arabia's financial reserves were the third largest in the world, after China and Japan, at the end of last year. Brent was down slightly in London trading Wednesday, as the benchmark crude continued to face headwinds, including low demand and U.S. crude stocks at historic highs.
<urn:uuid:86a1e556-1c2e-4d34-872f-f2cf88baa7e4>
CC-MAIN-2016-26
http://blogs.wsj.com/moneybeat/2013/05/08/energy-journal-us-gas-boom-reverberates-in-europe/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397111.67/warc/CC-MAIN-20160624154957-00127-ip-10-164-35-72.ec2.internal.warc.gz
en
0.957352
1,012
2.578125
3
U.S. Students Get Top Scores for Sleepiness While U.S. students often catch flak for their performance on large-scale international assessments, they may be approaching world dominance on one such indicator: sleepiness. In both the Trends in International Mathematics and Science Study and the Progress in International Reading Literacy Study, the percentage of U.S. pupils enrolled in classrooms in which teachers report that student sleepiness limits instruction "some" or "a lot" in 4th grade reading and 4th and 8th grade math and science has consistently exceeded 70 percent. Internationally, overall averages for sleepiness range from 46 percent to 58 percent, depending on the grade level and the subject. (Eighth grade science classes were the "sleepiest.") What does this all mean? It is difficult to say. In 2011, the journal Sleep Medicine published a meta-analysis of 41 studies that found that, at least in adolescence, students in Asian nations went to bed latest on school nights, resulting in the world's highest rates of daytime sleepiness. But a quick glance at the TIMSS and PIRLS charts suggests that the United States generally has higher percentages of students enrolled in classes in which teachers reported that sleepiness limited instruction. Although some Asian nations and jurisdictions reported relatively high rates in certain subjects or grade levels, others (especially Japan) are generally below the international average. By contrast, U.S. rates range from 73 percent in science and 4th grade math to 85 percent in 8th grade science. Countries and jurisdictions with similar rates in at least some grade levels or subjects included Australia, Taiwan, Finland, France, Hong Kong, New Zealand, Saudi Arabia, and Turkey. However, because of the way the data were collected, TIMSS and PIRLS could not say where, precisely, the United States ranked in the world. "What we can say is that greater percentages of students in the United States, in comparison to other countries, have teachers that report their instruction is limited due to students' lack of sleep," said Chad Minnich, a spokesman for the TIMSS & PIRLS International Study Center at Boston College. "Further, our data show that when instruction is limited due to students' lack of sleep, that achievement in mathematics, science, and reading is lower." However, Iris C. Rotberg, a research professor of education policy at George Washington University in Washington, says that valid conclusions about students' sleepiness cannot be drawn from teachers' responses to a questionnaire item asking to what extent their instruction was limited by students suffering from a lack of sleep. "Further, because of the basic sampling and measurement flaws in international test-score comparisons generally, the factors contributing to test-score rankings cannot be accurately identified," Ms. Rotberg said. But in a commentary published last month in the journal Teachers College Record, Meilan Zhang, an assistant professor of educational technology at the University of Texas at El Paso, argued that "[r]esearchers, policymakers, teachers, health-care practitioners, parents, and students" should take notice of the TIMSS and PIRLS findings. "Improving student sleep deserves more attention than is currently received in public discourse and national agendas for education," she wrote. "It is likely that when the sleepiness rankings of U.S. students go down, their science, mathematics, and reading score rankings will move up in the next TIMSS and PIRLS." Ms. 
Zhang's theory is that U.S. students are sleepy in school because they spend too much time texting, playing video games, watching TV, and using media in other ways. "Heavy media use interferes with sleep by reducing sleep duration, making it harder to fall asleep, and lowering sleep quality," she wrote, citing a 2011 research review in the journal, Sleep Medicine. But the relationship between youth media use and sleep is not so simple, said Michael Gradisar, who coauthored both that review and the Sleep Medicine meta-analysis. "Technology use is the new culprit when trying to answer 'Why are school-age children sleeping less?'" said Mr. Gradisar, an associate professor of psychology at Flinders University in Adelaide, Australia. There may be safe limits to technology use, Mr. Gradisar stated. For instance, recent research results indicate that using a bright screen for an hour before bed or even playing violent video games for less than that will not necessarily interfere with teenagers' sleep, he wrote. But longer periods of usage can be harmful to sleep, Mr. Gradisar added. Rather than delay school start times, he said, a first step should be educating parents about limiting the hours their children are using technology before bed, and enforcing a consistent bedtime. Early school start times are also commonly blamed for student sleepiness, especially for adolescents. Secondary schools around the nation and the world have been delaying start times, often with positive results. Mr. Minnich of the TIMSS and PIRLS center hesitated to "attribute causality or apportion blame to any particular factor." But he did speculate that cost-saving measures to consolidate bus routes might help explain U.S. students' sleepiness. "For those children who board the bus first, they must get up earlier, may end up dozing en route to school, and may end up arriving at school sleepy," he said. Vol. 33, Issue 35, Pages 1,20
<urn:uuid:1ddc9c7a-d961-4d8f-a89d-eea4accd6f11>
CC-MAIN-2016-26
http://www.edweek.org/ew/articles/2014/06/11/35sleepy_ep.h33.html?cmp=RSS-FEED
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394987.40/warc/CC-MAIN-20160624154954-00189-ip-10-164-35-72.ec2.internal.warc.gz
en
0.953834
1,116
2.75
3
Infuriated at attempts since the Black Death to revive old villein services, the peasants of Essex and Kent marched on London in 1381. Choosing Wat Tyler as their leader, they advanced on 12 June to Blackheath, near London, where John Ball preached a fiery sermon. On 14 June the youthful King Richard II and his counsellors met the rebels at Mile End; at a further meeting at Smithfield the following day, Tyler was killed and the rebels dispersed. While it is notable as the first great revolt of labour against capital, the revolt of 1381 led to no startling changes.
<urn:uuid:0e76f44e-7178-4526-a092-8b411e120b3f>
CC-MAIN-2016-26
http://www.humanitiesweb.org/spa/htd/ID/12
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399428.8/warc/CC-MAIN-20160624154959-00185-ip-10-164-35-72.ec2.internal.warc.gz
en
0.975021
113
3.6875
4
Materials 2014, 7(4), 3160-3175; doi:10.3390/ma7043160 Abstract: Energy-dispersive X-ray microanalysis (EDX) is a technique for determining the distribution of elements in various materials. Here, we report a protocol for high-spatial-resolution X-ray elemental imaging and quantification in plant tissues at subcellular levels with a scanning transmission electron microscope (STEM). Calibration standards were established by producing agar blocks loaded with increasing KCl or NaCl concentrations. TEM-EDX images showed that the salts were evenly distributed in the agar matrix, but tended to aggregate at high concentrations. The mean intensities of K+, Cl−, and Na+ derived from elemental images were linearly correlated to the concentrations of these elements in the agar, over the entire concentration range tested (R > 0.916). We applied this method to plant root tissues. X-ray images were acquired at an actual resolution of 50 nm × 50 nm to 100 nm × 100 nm. We found that cell walls exhibited higher elemental concentrations than vacuoles. Plants exposed to salt stress showed dramatic accumulation of Na+ and Cl− in the transport tissues, and reached levels similar to those applied in the external solution (300 mM). The advantage of TEM-EDX mapping was the high spatial resolution achieved for imaging elemental distributions in a particular area with simultaneous quantitative analyses of multiple target elements. 1. Introduction Energy-dispersive X-ray microanalysis (EDX) is a technique for analyzing elements at the microscopic level. For this purpose, scanning (SEM) or transmission electron microscopes (TEM) are equipped with an energy dispersive system for quantitative electron probe X-ray microanalysis. The TEM-EDX system requires embedded samples, which enable high spatial resolution. The SEM-EDX system can be applied to surfaces of untreated specimens and, thus, provides a rapid way of measuring elemental distributions in plant and animal materials. The most significant recent advance has been the development of cryo-SEM for in situ elemental quantification by EDX [1,2]. This method assumes that little or no element redistribution can occur in a frozen-hydrated specimen [1,2]. Thus, SEM-EDX allows direct analysis of frozen-hydrated materials without freeze-drying or embedding. However, in general, the low spatial resolution of SEM-EDX makes it difficult to determine structures from the rough surfaces of frozen-hydrated bulk specimens. The cellular and subcellular distributions of elements in biological materials are typically investigated at relatively high spatial resolution with TEM-EDX [3–5]. This technique requires preparation of thin sections of resin-embedded plant materials [6–8]. For element analyses, TEM-EDX protocols have been developed that avoid ion re-distribution during the embedding procedure. Thus, TEM-EDX can be used to examine elements of interest within cell compartments with high spatial resolution. The advent of imaging techniques has advanced the analysis of elemental distributions and the quantification of elements in cells and tissues. Elemental imaging also provides improved spatial information for the analysis of biological materials. Moreover, the mean values of elemental concentrations derived from an X-ray image represent several hundreds to thousands of probe measurements, which provide more reliable information than dozens of randomly selected measurement points. 
Previous studies have described results from frozen-hydrated biological materials analyzed with X-ray imaging in a SEM system [9,10]. However, quantitative elemental imaging in a TEM system has seldom been attempted in biological materials. For quantification of X-ray images, it is necessary to obtain standards that contain the element of interest in known amounts. It was previously established that calibration standards for quantitative X-ray microanalysis in a TEM could be produced by adding 6–600 mM KCl to 5% agar. The agar-KCl blocks proved to be highly suitable for the quantification of X-ray microanalytical measurements. In the present study, we prepared calibration standards by adding 0–320 mM KCl or NaCl to an agar matrix. The samples were processed in the same way as the plant tissues; i.e., they were rapidly frozen, freeze-dried, embedded in plastic, and sectioned at 1.0-μm thickness [6–8]. Elemental images of agar standards were quantified and the data were used to generate calibration curves for assessing element concentrations in plant cell compartments. EDX analyses are particularly suited to investigations of stresses imposed by toxic elements or excess salinity. Here, we employed TEM-EDX to study salt distributions in Populus euphratica Oliver, a salt-tolerant woody species. P. euphratica is used as a model plant to address tree-specific mechanisms underlying salt tolerance [11–20]. Previous X-ray microanalysis with random point measurements revealed that, compared to salt-sensitive species, P. euphratica roots accumulated more Na+ in cortical cell walls, but significantly less Na+ in stelar walls [21,22]. Furthermore, vacuolar compartmentalization of Na+ and Cl− could be demonstrated in root cortical cells, but the concentrations were apparently lower in vacuoles than in the cell walls [21,22]. However, those results were somewhat difficult to interpret, because the images compared were not acquired under the same measuring conditions. In these studies, to obtain representative data for the analyzed probe, the electron beam was adjusted to the size of the investigated structure [21,22]. Thus, probe measurements of cell walls, cytoplasm, and xylem vessels were acquired with a narrow electron beam, and measurements of vacuoles were acquired with a broad electron beam that covered the vacuolar lumen [21–25]. Therefore, the data had to be corrected for the different measuring intensities of the applied electron beam. In the present study, an electron beam of uniform width and intensity was used for quantitative X-ray elemental imaging of root cells of P. euphratica. With the use of standards, we estimated the ion concentrations within biological materials, including the root cortical cells and xylem vessels. The use of cryo-EDX to investigate frozen-hydrated samples directly with SEM avoids potential drying artifacts that may occur during freeze drying. However, it precludes structure determinations in the elemental images, due to its relatively low resolution. In our study, agar standard and plant samples were analyzed at a high spatial resolution ranging from 50 nm × 50 nm to 100 nm × 100 nm. High-spatial-resolution elemental images of these tissues showed that the ion gradient varied between different subcellular compartments. High spatial resolution of intracellular ion concentrations is a major advantage of TEM-EDX compared to the lower resolution elemental images obtained with cryo-analytical SEM. 
Therefore, the proposed TEM-EDX protocol is a feasible method for estimating multiple elemental concentrations within cell compartments. 2. Results and Discussion 2.1. Elemental Images of Agar Standards STEM images of specimens were acquired prior to X-ray imaging. In the STEM image, a frame adjacent to the measured region was used for drift correction (Figure 1A,B); this is required, because micrographs at high magnification are distorted by motion of the sample during the scanning and image acquisition. For the purpose of this study, agar specimens were analyzed in an area of 1.75 μm × 1.75 μm, at a spatial resolution of 50 nm × 50 nm (35 × 35 data points), with a dwell time of 10 s per point. X-rays were detected with the EDX detector at 80 kV with a low current (beam size set to 8 in the Tecnai TEM, FEI, Hillsboro, OR, USA). The beam current was constant during the collection of elemental maps, and each collection period lasted at least 4 h. Drift corrections were performed automatically every 400 live seconds during the period of X-ray imaging. Kα line peak area intensities (counts) of the measuring regions were analyzed with TEM imaging and analysis (TIA) software (FEI, Hillsboro, OR, USA). The obtained elemental images were evaluated with a rainbow color scale, where X-ray intensity increases from pink (low intensity) to blue, green, yellow, red, and black (high intensity). To visualize changes at low and high concentration ranges, we used log-scaling. The maximum, mean, and minimum intensities of K+ and Cl− from the measured regions were extracted with TIA software. The maximum and minimum values must be fixed for comparing concentrations between different elemental images. In this study, the maximum value was set at 1000, due to the high pixel intensity measured at 320 mM KCl agar. To set the minimum limit, we used the mean value of minimum intensities across the elemental images. In the absence of KCl, STEM images of agar blocks showed a uniform, resin-embedded agar matrix (Figure 1A). In the presence of KCl, electron-dense precipitates became visible in the agar block (Figure 1B). X-ray imaging indicated that these aggregates were formed from KCl (Figure 1C,D). Thus, the sublimation of water apparently resulted in KCl aggregation and the formation of crystals. During sample preparation, we took rigorous precautions to minimize ion migration. We used diethyl ether, the preferred substitution solvent, in the vacuum infiltration process. Prior to the infiltration step, a molecular sieve was used to absorb any water in the diethyl ether. Moreover, the relative humidity was maintained at about 10%–20% to exclude atmospheric moisture during the embedding and sectioning processes. All sectioned specimens were coated with carbon under vacuum, and subsequently, stored in a dry box until analysis. Our results showed that the high intensity points were evenly distributed in the agar matrix (Figure 1B); thus, large-scale migration of KCl did not occur during the freezing, freeze-drying, and embedding processes. Notably, we observed perfect overlap of the K+ and Cl− images (Figure 1D). As shown in Figure 1D, the pixel intensities of potassium and chloride varied throughout the X-ray images. Therefore, taking the minimum intensity as the background, we calculated the mean values corrected for the minimum intensity (i.e., element peak = mean intensity − minimum intensity) for each elemental image (35 × 35 data points) to establish the calibration curve. 
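As an illustration of how this background correction could be computed from an exported intensity map, the following Python sketch applies the mean-minus-minimum rule to a synthetic 35 × 35 array standing in for a real Kα map; the array contents, variable names, and the random generator are placeholders for illustration only and are not part of the TIA software or the authors' workflow.

```python
import numpy as np

# Placeholder for a 35 x 35 K-alpha intensity map (counts per pixel);
# in practice this would be exported from the EDX analysis software.
rng = np.random.default_rng(0)
k_map = rng.poisson(lam=40, size=(35, 35)).astype(float)

def element_peak(intensity_map: np.ndarray) -> float:
    """Background-corrected mean intensity: mean minus the minimum pixel value."""
    return float(intensity_map.mean() - intensity_map.min())

print(f"K element peak (counts): {element_peak(k_map):.1f}")
```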
When the measured K+ and Cl− intensities were plotted against the added KCl concentrations, a linear correlation was observed between the mean intensity and the content of these elements in the agar matrix (Figure 2). We noticed that the fitted regression lines for K+ and Cl− failed to go through the origin (Figure 2). This would be expected for agar that contained insoluble K+ and Cl− prior to the addition of KCl. However, in agar blocks without KCl, we did not find evident peaks of either K+ or Cl− in the measured spectra (Figure 1C). Therefore, the intercept represents background noise. We also prepared a NaCl-agar standard series with the same protocol as above. When the measured Na+ intensities were plotted against added NaCl (0 to 320 mM), the slope of the Na+ regression line (0.25) was nearly half those derived for K+ (0.48) and Cl− (0.42; Figure 2); this indicated that our assay had less sensitivity for Na+. As a result, there was higher uncertainty in the background signal for Na+ concentrations compared to K+ concentrations. Based on the error observed in the Y-intercepts, the detection limits for the elements were 3 mM K+, 8 mM Cl−, and 7 mM Na+. Note that these limits show that the method had high sensitivity, because concentrations of about 10 mM correspond to only 0.01 fmol·μL−1 when converted to the volumes analyzed here. The increase in the standard deviation (SD) with increasing salt concentrations was caused by aggregation of KCl or NaCl (Figure 1D, Figure 2) resulting in large differences between the pixel intensities detected from the areas in the presence and absence of crystals. As a practical note, we would like to add that visualization can be enhanced by adjusting the intensity scale for the image at different concentrations. For example, for image profiles of low K+ concentrations, visualization of the K+ distribution can be enhanced by reducing the maximum intensity from 1000 to lower values, as demonstrated in Figure 3. 2.2. Elemental Images of P. euphratica Root Cells Tissue and cellular studies have repeatedly shown that P. euphratica is able to control K+/Na+ homeostasis under salt stress [26–29]. In this study, P. euphratica plants were cultured in hydroponic Long Ashton nutrient solution. After salt shock with 300 mM NaCl for 24 h, we analyzed K+, Cl−, and Na+ distributions in the root cortex and xylem vessels with X-ray elemental imaging in a TEM, and compared the results with those of untreated controls (Figures 4 and 5). The area measured depended on the cell structures of interest. For X-ray imaging at the cellular level, the actual spatial resolution ranged from 50 nm × 50 nm to 100 nm × 100 nm. Elemental images of agar standards were obtained at the same resolution as that used for the plants (Figures 4 and 5). The measuring time was 10 s for each point. When plants were grown under control conditions, high-spatial resolution elemental images of root cortex cells showed that both Cl− and K+ were higher in the cortical wall than in the vacuole. These semi-quantitative assessments of the elemental concentrations were based on the scales produced by the agar standards (Figure 4). For quantitative estimations, intensities were measured in the areas indicated with the frames in Figure 4, and the data are compiled in Table 1. Based on these data and the calibration curves (Figure 2) we estimated that the cell wall contained 67 mM Cl− and 50 mM K+ and the vacuoles contained 14 mM Cl− and 13 mM K+ in cortical cells of controls. 
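A minimal sketch of the quantification step, under the linearity assumption described above, is given below: the background-corrected intensities of the agar standards are fitted with an ordinary least-squares line against the known concentrations, and the fitted line is inverted to convert a measured intensity from a cell compartment into an estimated concentration. The standard intensities in the example are fabricated to be roughly consistent with the reported K+ slope of about 0.48 counts per mM; they are not the authors' measured data.

```python
import numpy as np

# Known KCl concentrations of the agar standards (mM) and hypothetical
# background-corrected mean intensities (counts) for each standard.
conc_mM = np.array([0, 20, 40, 80, 160, 320], dtype=float)
k_peak = np.array([1.0, 10.5, 20.0, 39.0, 78.0, 154.0])  # ~0.48 counts per mM

# Ordinary least-squares fit of intensity against concentration
slope, intercept = np.polyfit(conc_mM, k_peak, deg=1)

def intensity_to_conc(peak_counts: float) -> float:
    """Invert the calibration line to estimate a concentration in mM."""
    return (peak_counts - intercept) / slope

# Example: estimate the K+ concentration for a measured cell-wall intensity
measured_peak = 24.0  # hypothetical background-corrected intensity
print(f"slope = {slope:.3f} counts/mM, intercept = {intercept:.2f} counts")
print(f"Estimated [K+] of the measured region: ~{intensity_to_conc(measured_peak):.0f} mM")
```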
In contrast, the Na+ concentrations in the vacuoles of root cells were indistinguishable from background noise (3.6 mM), and the cell walls contained low Na+ concentrations (19 mM) (frame B, Figure 4a). Given the small volumes measured, it was difficult to quantify low element concentrations precisely, because the signal was not sufficiently distinguished from the background. When plants were exposed to salt shock, they accumulated high levels of both Na+ and Cl− in cell walls and in the vacuoles of root cortex cells (Figure 4b). The concentrations exceeded those used to produce the calibration curves, but we estimated that they would correspond roughly to 386 mM Cl− and 517 mM Na+ in the vacuole and 429 mM Cl− and 586 mM Na+ in the cell wall, assuming a linear relationship between the elemental concentrations and the intensities measured in the high concentration range (Figure 4b, Table 1). The concordance of Na+ and Cl− observed at this resolution indicated that salt aggregates had formed in the vacuole (Figure 4b), due to the non-aqueous preparation technique. Aggregate formation may result in overestimations of salt concentrations at small scales. In contrast to Na+ and Cl−, salt shock lowered the K+ levels in root cortex cells. However, note that the measurements acquired in the frames shown in Figure 4 (Frame A: 36 mM, Frame B: 24 mM) indicated K+ concentrations were well above the background level. We also imaged the vessels in the vascular system of P. euphratica roots at high spatial resolution (Figure 5). The lumina of the vessels of unstressed plants contained K+ and Cl− concentrations at the detection limit, but the estimated Na+ concentration (27 mM) was well above the detection threshold (Figure 5a). In the vessel walls, the analyzed elements were enriched (30–40 mM). After salt shock, the concentrations of Na+ and Cl− in the vessel lumina increased dramatically (197 mM Na+ and 98 mM Cl−), and the concentrations of K+ slightly increased (39 mM, Figure 5b). Vessel walls of salt shocked plants contained about three times higher Na+ and Cl− concentrations than the lumen (Figure 5b). In the lumina, we also detected salt aggregates, most likely caused by the sublimation of water during non-aqueous sample preparation (Figure 5b, Frame A). These aggregates contained estimated concentrations of about 750 mM Cl− and 949 mM Na+. Apparently, at high concentrations salt tended to aggregate; this phenomenon may have led to overestimating salt concentrations in the biological systems, when the analysis was conducted at very high resolution. It is necessary to analyze the whole region of target structure, which includes not only the salt aggregates but also the areas without aggregates. In our analysis, the aggregates filled about half of the lumen of the vessel; thus, we may assume that the true salt concentration in the xylem sap was correspondingly, i.e., twice lower than the measured value. The capacity to maintain K+ homeostasis is crucial for herbaceous and woody species to adapt to saline environments [20,30]. We found that the loss of K+ induced by salt shock in P. euphratica plants was not pronounced in the measured cell compartments compared to the increment of salt ions (Figures 4 and 5). This was consistent with our previous findings that P. euphratica plants exhibit a remarkable ability to retain K+ under saline conditions , presumably due to their relatively high rates of net uptake and transport of K+ . The observed enrichments of Na+ and Cl− were previously reported for P. 
euphratica roots exposed to high salinity [21,22]. The vacuolar compartmentalization of salts (as detected here in cortical cells) is crucial for P. euphratica to adapt to saline environments . The observed high concentrations in both the cell walls and the vacuoles suggest that exposure to salt shock initially flushed the roots with salt. This notion was supported by the observation of very high salt concentrations in the vessel lumina. In our previous microanalysis, we found that P. euphratica accumulated higher Na+ and Cl− in the cortical walls than in the vacuole, and that the salt concentration declined in the vascular system [21,22]. It is likely that the suberized endodermis of P. euphratica blocked apoplastic ion transport into the inner root, and vacuole sequestration may also have restricted symplastic translocation of ions from the cortex to the xylem [21,22]. However, in those studies, the exposure conditions differed, because the P. euphratica were grown in soil [21–23]. Moreover, a time course analysis of P. euphratica under salt stress showed Na+ accumulation and redistribution over the long term . P. euphratica roots exhibit a high capacity for Na+ extrusion across the plasma membrane under NaCl stress, which results in a Na+ gradient between the apoplast and the symplast [26,27]. However, the present results showed that non-acclimated P. euphratica roots could not avoid symplastic salt accumulation, and high transport into the xylem occurred with exposure to excessive salinity. Recently, specific fluorescent probes have been employed to trace the distribution of elements such as Ca2+, Na+ and others. While these probes are well suited to investigate cell cultures or other thin-walled cell types [17,28,29], they cannot penetrate tissues with thick cell walls. An advantage of the present method is its applicability to these tissues. Moreover, we can measure simultaneously the localization of multiple elements. 3. Experimental Section 3.1. Plant Materials and Treatment Hydroponically cultured plants of P. euphratica (clone B2) were used in this study. The stock culture, obtained from P. euphratica trees grown in the Ein Avdat Valley (Israel), was multiplied by micropropagation . Rooted plantlets were transferred to aerated hydrocultures supplemented with Long Ashton nutrient solution for 3 months . The plants were maintained in a growth room at 24 °C. A 16 h photoperiod was maintained with cool-white fluorescent light, which provided about 200 μmol quanta m−2·s−1 of photosynthetically active radiation. Uniform plants of 80–100 cm in height with 40–60 leaves were subjected to salt shock with 300 mM NaCl for 24 h. Control plants were cultured in Long Ashton nutrient solution without addition of NaCl. 3.2. Root Sample Preparation Our standard procedure for sample preparation followed the protocol of Fritz . Briefly, root apices (ca. 1.5 cm long) were sampled from control and salt stressed P. euphratica plants. The root samples were immediately placed in aluminium sample holders and rapidly frozen in a 2:1 mixture of propane:isopentane, which was cooled with liquid nitrogen. Samples were vacuum freeze-dried at −60 °C for 120 h, and then, slowly equilibrated to the room temperature (22 °C) over a period of 24 h. Then, the samples were stored over silica gel until plastic infiltration and polymerization. Before plastic infiltration, freeze-dried root segments were transferred into vacuum-pressure chambers, and infiltrated with ether at 27 °C overnight. 
The plastic preparation was a 1:1 mixture of styrene and butyl methacrylate, with 1% benzoylperoxide, stabilized with 50% phthalate. Plastic infiltration was conducted in three steps: samples were incubated in a 1:1 mixture of ether and plastic for 4 h; then, they were transferred to a 1:3 mixture of ether and plastic for 4 h; finally, they were transferred to 100% plastic for 24 h (×2). Following infiltration, samples were transferred into gelatin capsules, polymerized at 60 °C for 12 h, then transferred into an oven and polymerized at 30 °C for at least 7 days. 3.3. Preparation of Agar Standards In this study, we used purified agar as the supporting matrix for standardized element concentrations. Agar is an organic material and may ideally undergo shrinkage similar to that observed in biological tissues during freezing and freeze-drying [7,32]. KCl and NaCl agar standards were established as previously described [7,25]. Briefly, 10 g agar (Sigma, Steinheim, Germany) was suspended in 2 L of de-ionized water at room temperature. To remove ionic impurities, the water was replaced once a day with freshly de-ionized water. After 10 days of purification, the agar was freeze-dried at −60 °C for 96 h and stored at room temperature. The required amounts of salts for the KCl and NaCl series (0, 20, 40, 80, 160, 320 mM) were dissolved in heated water with 5% purified agar, w/v. Then, the solutions of agar with different salt concentrations were poured in 2 mm-thick layers in Petri dishes. Blocks of 1–2 mm in length and width were cut with a razor blade, immediately placed into baskets made of fine aluminum mesh, and rapidly frozen in a mixture of propane:isopentane (2:1) at the temperature of liquid nitrogen. Samples were vacuum freeze-dried at −60 °C for 72 h, and then, they were slowly allowed to equilibrate to room temperature (ca. 22 °C) over a period of 24 h. Agar blocks were vacuum-pressure infiltrated with water-free diethyl ether, and then, infiltrated with plastic as described above. 3.4. Cutting Specimen Sections After polymerization, root and agar samples were cut into 1.0-μm-thick sections with a dry glass knife on an ultramicrotome (Ultracut E, Reichert-Jung, Vienna, Austria). Root cross sections were cut in the region approximately 1.0–1.5 cm behind the root apex, which contained few primary xylem vessels. Slices were mounted onto copper grids (mesh 100), coated with carbon, and stored over silica gel until analysis. 3.5. X-ray Imaging in a TEM Root and agar sections were analyzed in a Tecnai™ TEM (Tecnai G2 Spirit, FEI Company, Hillsboro, OR, USA) equipped with an EDX detector (EDAX International, Mahwah, NJ, USA). Prior to the X-ray imaging, samples were exposed to high beam current to stabilize the sections. However, the exposure was usually less than 5 min which could not cause severe damages. Quantitative elemental images of agar and root specimens were analyzed with TIA software version 3.2 (FEI Company, Hillsboro, OR, USA). TIA Smart Phase imaging provides improved analyses by automatically collecting spectra and generating phase maps with elemental distributions and associated spectra. STEM images of agar and root specimens were acquired prior to X-ray imaging. In the STEM images, root cells with clear cortical structure and xylem vessels were selected for X-ray imaging. Elemental images of the agar and root specimens were obtained at 1200-fold magnification. X-rays were detected with the EDX detector at 80 kV with a low current (beam size was set to 8 in the Tecnai TEM). 
The operating parameters were as follows: accelerating voltage: 80 keV; take-off angle: 15°, and the time for collecting X-rays was 10 s for each measuring point. Agar samples containing various salt concentrations were analyzed over an area of 1.75 μm × 1.75 μm at a resolution of 35 × 35 pixels. The maximum, mean, and minimum intensity of K+, Na+, and Cl− were measured in the indicated regions (frames) and calculated with TIA imaging and analysis software. For area imaging of root tissues, the actual spatial resolution ranged from 50 nm × 50 nm to 100 nm × 100 nm. Elemental images of agar standards were obtained at the same resolutions. During the period of X-ray imaging, drift corrections were performed automatically every 400 s. The minimum intensity was taken as the background, and the mean intensity was extracted by TEM Imaging & Analysis from the measured regions. We used the following formula to calculate the element peak: element peak = the mean intensity − minimum intensity. With the use of suitable agar standards, we assessed the concentrations and distributions of diffusible elements in biological materials. The TEM-EDX imaging technique was successfully applied to microtome-sectioned samples of uniform thickness. The advantage of X-ray imaging in a TEM is the high-spatial resolution imaging of multiple elements within an area of interest and its applicability to tissues which are unsuitable for the use of fluorescent probes. The elemental images of agar standards provided a feasible means to make simultaneously semi-quantitative estimations of the concentrations of multiple elements in different cell compartments. The extracted mean values from selected regions of elemental images could be used for quantification, because the pixel intensities of K+, Na+, and Cl− were linearly correlated with the concentrations of these elements in the agar matrix. This research was supported jointly by the National Natural Science Foundation of China (Grant No. 31270654, 31170570), the Guest Lecturer Scheme of Georg-August-Universität Göttingen (Germany), the German Science Foundation (Grant No. INST 186/766-1 FUGG), the Bundesministerium für Ernährung, Landwirtschaft und Verbraucherschutz (BMELV) for travel grants, the Research Project of the Chinese Ministry of Education (Grant No. 113013A), the key project for Overseas Scholars by the Ministry of Human Resources and Social Security of PR China (Grant No. 2012001), the Program for Changjiang Scholars and Innovative Research Teams in the University (Grant No. IRT13047), and the Program of Introducing Talents of Discipline to Universities (111 Project, Grant No. B13007). The publication fund of the University of Göttingen and the Deutsche Forschungsgemeinschaft supported open access publication of this article. We thank Christine Kettner and Merle Fastenrath for their excellent technical assistance. Shaoliang Chen and Andrea Polle drafted the experiments, analyzed data and wrote the manuscript. Shaoliang Chen and Heike Diekmann prepared the samples and measured the elements. Dennis Janz analyzed the data. All authors commented and contributed to various versions of this paper. Conflicts of Interest The authors declare no conflict of interest. - Ryan, M.H.; McCully, M.E.; Huang, C.X. Relative amounts of soluble and insoluble forms of phosphorus and other elements in intraradical hyphae and arbuscules of arbuscular mycorrhizas. Funct. Plant Biol 2007, 34, 457–464. [Google Scholar] - McCully, M.E.; Canny, M.J.; Huang, C.X.; Miller, C.; Brink, F. 
Cryo-scanning electron microscopy (CSEM) in the advancement of functional plant biology: energy dispersive X-ray microanalysis (CEDX) applications. Funct. Plant Biol 2010, 37, 1011–1040. [Google Scholar] - Sauberman, A.J.; Heyman, R.V. Quantitative digital X-ray imaging using frozen hydrated and frozen dried tissue sections. J. Microsc 1987, 146, 169–182. [Google Scholar] - LeFurgey, A.; Davilla, S.D.; Kopf, D.A.; Sommer, J.R.; Ingram, P. Real-time quantitative elemental analysis and mapping: Microchemical imaging in cell physiology. J. Microsc 1992, 165, 191–223. [Google Scholar] - Bidwell, S.D.; Crawford, S.A.; Woodrow, I.E.; Sommer-Knudsen, J.; Marshall, A.T. Sub-cellular localization of Ni in the hyperaccumulator, Hybanthus floribundus (Lindley) F. Muell. Plant Cell Environ 2004, 27, 705–716. [Google Scholar] - Fritz, E. X-ray microanalysis of diffusible elements in plant cells after freeze-drying, pressure-infiltration with ether and embedding in plastic. Scanning Microsc 1989, 3, 517–526. [Google Scholar] - Fritz, E.; Jentschke, G. Agar standards for quantitative X-ray microanalysis of resin-embedded plant tissues. J. Microsc 1994, 174, 47–50. [Google Scholar] - Fritz, E. Measurement of cation exchange capacity (CEC) of plant cell walls by X-ray microanalysis (EDX) in the transmission electron microscope. Microsc. Microanal 2007, 13, 233–244. [Google Scholar] - Marshall, A.T.; Xu, W. Quantitative elemental X-ray imaging of frozen-hydrated biological samples. J. Microsc 1998, 190, 305–316. [Google Scholar] - Marshall, A.T.; Goodyear, M.J.; Crewther, S.G. Sequential quantitative X-ray elemental imaging of frozen-hydrated and freeze-dried biological bulk samples in the SEM. J. Microsc 2012, 245, 17–25. [Google Scholar] - Chen, S.; Li, J.; Wang, S.; Hüttermann, A.; Altman, A. Salt, nutrient uptake and transport, and ABA of Populus euphratica: A hybrid in response to increasing soil NaCl. Trees Struct. Funct 2001, 15, 186–194. [Google Scholar] - Chen, S.; Li, J.; Wang, T.; Wang, S.; Polle, A.; Hüttermann, A. Osmotic stress and ion-specific effects on xylem abscisic acid and the relevance to salinity tolerance in poplar. J. Plant Growth Regul 2002, 21, 224–233. [Google Scholar] - Gu, R.; Fonseca, S.; Puskás, L.G.; Hackler, L.J.; Zvara, A.; Dudits, D.; Pais, M.S. Transcript identification and profiling during salt stress and recovery of Populus euphratica. Tree Physiol 2004, 24, 265–276. [Google Scholar] - Ottow, E.A.; Brinker, M.; Teichmann, T.; Fritz, E.; Kaiser, W.; Brosché, M.; Kangasjärvi, J.; Jiang, X.; Polle, A. Populus euphratica displays apoplastic sodium accumulation, osmotic adjustment by decreases in calcium and soluble carbohydrates, and develops leaf succulence under salt stress. Plant Physiol 2005, 139, 1762–1772. [Google Scholar] - Wang, R.; Chen, S.; Deng, L.; Fritz, E.; Hüttermann, A.; Polle, A. Leaf photosynthesis, fluorescence response to salinity and the relevance to chloroplast salt compartmentation and anti-oxidative stress in two poplars. Trees Struct. Funct 2007, 21, 581–591. [Google Scholar] - Wang, R.; Chen, S.; Zhou, X.; Shen, X.; Deng, L.; Zhu, H.; Shao, J.; Shi, Y.; Dai, S.; Fritz, E.; et al. Ionic homeostasis and reactive oxygen species control in leaves and xylem sap of two poplars subjected to NaCl stress. Tree Physiol 2008, 28, 947–957. [Google Scholar] - Sun, J.; Zhang, X.; Deng, S.; Zhang, C.; Wang, M.; Ding, M.; Zhao, R.; Shen, X.; Zhou, X.; Lu, C.; et al. 
[Table: element counts for Figure 4 — control (vacuole, Frame A; cell wall, Frame B) versus salt treatment (vacuole, Frame B; cell wall, Frame A); the numerical values were not recoverable from this copy.]
[Table: element counts for Figure 5 — control (lumen, Frame A; cell wall, Frame B) versus salt treatment (aggregates in lumen, Frame A; lumen, Frame C; cell wall, Frame B); the numerical values were not recoverable from this copy.]
© 2014 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).
<urn:uuid:405dc951-415e-43c8-8982-038dfea58eb4>
CC-MAIN-2016-26
http://www.mdpi.com/1996-1944/7/4/3160/htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394414.43/warc/CC-MAIN-20160624154954-00098-ip-10-164-35-72.ec2.internal.warc.gz
en
0.902348
8,613
2.59375
3
Building a wind farm in a new country with relatively little exposure to the wind power industry is not for the faint-hearted. Doing so means dealing with a different business climate, a new culture, and getting to grips with an unfamiliar (and potentially unforgiving) regulatory environment – all at the same time. Then let's not forget the need for recruitment of local personnel, intensive training of technicians, and streamlining of logistics supply lines into the country. And despite the challenges associated with energy projects in emerging markets, developers are unlikely to get much leeway when it comes to meeting project timelines.
One example of a region where project development is especially tough is Latin America. In total Latin America spans over 21,000,000 km2 and is home to a wide range of environments, many of which are ideal for renewable energy production. From the winds of southern Patagonia to Pacific winds hitting Oaxaca in Mexico, the potential for wind power is vast. But in terms of installed capacity, the numbers are still very small. The region's total installed capacity at the end of 2009 was only 1,072 MW. The Global Wind Energy Council (GWEC) predicted that the region's installed capacity could reach 2 GW by the end of 2010 (under its advanced scenario). The organisation also believes that – with favourable conditions, again under its advanced scenario – this could rise to 40 GW by 2020, and 90 GW by 2030 (for more information see GWEC's Global Wind Energy Outlook 2010). And despite major regional diversity, there does appear to be a growing will among Latin American policy makers and business leaders to make countries in the region a much more attractive proposition for renewable energy in general. Wind power in particular has the advantage that it's a good fit for hydropower – Latin America's primary electrical generation method.
Mexico moving forward
Having the second-largest population in Latin America, Mexico is a natural industrial leader, and despite its fair share of challenges, the country has been responsible for the biggest leap in Latin American wind capacity in 2010. The Mexican Government is also now starting to get behind wind – having set a target of 2.5 GW of wind by 2012. At the end of 2009, Mexico had installed 202 MW of wind power. The first half of 2010 saw another 300 MW installed. Rather than being fed into the grid, though, almost all of this capacity is used on site by self-generators, being fed into large businesses and plants.
New projects underway
Look at some of the names targeting Mexico and this gives some indication of the country's potential. Spanish wind turbine maker Gamesa recently announced it will supply 324 MW to two projects in Oaxaca, an area popular with developers because winds there are strong for most of the year – with capacity factors as high as 40 per cent reportedly being achieved. In fact some believe the area is one of the world's best wind resources, with average wind speeds of 12 metres per second (m/s). Oaxaca is situated in the narrowest part of Mexico, known as the Tehuantepec Isthmus, a flat area between two coastal mountain ranges that funnels the wind and keeps it strong for much of the year. According to the Mexican Wind Power Association, the region has more than 500 MW in place, around 500 MW in construction, and nearly 1.5 GW planned (source – Windpower Monthly, January 2010).
EDF, Clipper enter the fray
So, what are the pitfalls for developers wishing to take advantage of this emerging market?
A typical example of the Mexican model for wind development is the 27-turbine, 67.5 MW La Mata-La Ventosa wind farm, named after two nearby towns and situated in Oaxaca. The wind farm is the result of collaboration between French utility EDF Energies Nouvelles (EDF EN) and US Clipper Windpower, which supplied its 2.5 MW Liberty turbines. It was developed and built by EDF EN's Mexican subsidiary Electrica del Valle de Mexico (EVM), along with Clipper. The electricity generated is fed to a number of stores owned by retailer Walmart Mexico. EDF EN has a 15-year electricity supply agreement with Walmart that meets ‘self-supply’ regulations in Mexico. The La Mata-La Ventosa wind farm is the companies' first project in Mexico, and won the 2010 Deal of the Year award on behalf of the Export-Import Bank of the United States (the award, presented by President Obama, recognised EDF EN's decision to secure American-built turbines from Clipper for the La Mata-La Ventosa site).
Operation and maintenance
Clipper, and EDF EN's US subsidiary enXco, are responsible for operating and maintaining the wind farm. Clipper employs 13 staff on site, four of whom are being groomed for another Clipper wind project at Peñoles in Mexico. Teams of two technicians work on the turbines, alongside administrative, inventory and site supervision personnel. EDF EN also employs nine people at or near the site. They leave the operation of the turbines to Clipper and focus on operating electrical substations, computer monitoring of power production, as well as security and maintenance of the substations. While the contracted availability rate is 95%, the Clipper site has reportedly averaged 96.1%.
The project's turbine – Liberty
La Mata-La Ventosa uses Clipper's Liberty 2.5 MW. “We see the Liberty machine with its large-scale capacity and innovative improvements at the top of the scale in terms of industry technology advancement,” said David Corchia, CEO of EDF EN. The turbine is different from traditional designs. Modern wind gearboxes, for example, typically use two planetary stages and one parallel-shaft stage with helical gears, coupled to a single generator. But the expansion in size of the average turbine necessitates massive gear casings that stretch the capabilities of traditional manufacturers. This also places enormous loads on the bearings. As a result, the three-stage gearing concept suffers failures from excessive loads. Clipper has refined a lightweight two-stage helical design, using four permanent magnet (PM) generators instead of the usual single wound-rotor induction generator. This was then validated during testing at NREL, later evolving into the company's Liberty 2.5 MW turbine. “Liberty's multiple-drive path design…decreases individual gearbox component loads, which reduces gearbox size and weight,” said Bob Thresher, director of NREL's National Wind Technology Center. “The new generators significantly reduce component mass by eliminating much of the copper that would be required for windings in the rotor. The machine will also take advantage of advanced feedback controls to reduce load excursions in turbulent wind conditions and optimise pitch schedules to reduce drive train loads and improve energy capture,” he adds. Apart from reducing loads, this approach boosts turbine uptime.
If one generator fails in a traditional turbine, everything shuts down, and repairs can be costly – primarily because of the expense and potential scarcity of industrial cranes. With four generators, one can be taken offline if there is a problem. Clipper has also developed a way to improve the variable-speed technology used by many modern wind turbines. Variable-speed designs maximise energy capture from the wind by continually adjusting the rotational speed of the blades to match prevailing wind conditions. It also harnesses the latest generation of transistors and switches to achieve full power conversion, which is more suited to modern grid requirements such as low-voltage ride-through and grid stabilisation – at half the cost of full speed conversion. Clipper has accumulated many patents in this area. This all adds up to weight reduction. Many modern gearboxes in MW-scale wind turbines weigh 50 to 70 tonnes. Liberty's weighs in at only 36 tonnes, including the gearbox, brakes and housing.
Overcoming lack of expertise
One of the problems that Clipper ran into was a lack of educational resources in the south of Mexico. While Oaxaca has some educational colleges, there is a general lack of trade schools and no pipeline of graduates already versed in wind industry technology. The company targeted graduates with a technical degree – such as electromechanical and industrial engineering. “We focused on hiring locals from the towns of La Mata and La Ventosa,” said Clipper's Aaron Moeller, the company's EDF fleet manager, “and this willingness to hire from the vicinity created much goodwill”. Training covered all aspects of turbine maintenance, with a heavy emphasis on safety practices. “Experienced technicians apprenticed our staff on standard maintenance and day-to-day operations,” said Moeller. “Since COD, they have remained on site to ensure our local teams know what they're doing.” To overcome the language barrier, Clipper established an ‘English-as-a-Second-Language’ program for its Mexican employees after work. The next phase will be to pull out the American technicians and let the Mexican team “stand on its own feet”.
Investing in the region
One other problem the project faced was the supply lines into Mexico. Clipper initially shipped replacement parts and components from the USA. However, that incurred higher taxes as well as transportation costs, and delays were commonplace. In fact Mexico has a large and sophisticated supplier base of its own, and many of the wind farm's routine supplies are available locally, reducing costs and speeding up deliveries. “We realised that we need[ed] to foster relationships with Mexican vendors,” said Moeller. “Once we saw that we needed to buy local, we found that most of our components were there to be had.” Clipper has now established a long-term agreement with EDF EN's Mexican subsidiary for the supply of Liberty 2.5 MW wind turbines for more EDF EN projects in Mexico in the coming years. Currently, Clipper is supplying wind turbines to the Peñoles project in Oaxaca, which will provide power to a mining company. Twenty Liberty turbines are currently being erected at the site. “The project, Fuerza Eolica del Istmo, is scheduled to be completed in early 2011,” believes Moeller.
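The capacity-factor and availability figures quoted above (a roughly 40 per cent capacity factor for the best Oaxaca sites, a 95 per cent contracted availability and a reported 96.1 per cent achieved) lend themselves to a quick back-of-the-envelope yield estimate. The sketch below is not from the article or from Clipper/EDF EN; it simply illustrates, using the 67.5 MW La Mata-La Ventosa rating and an assumed 40 per cent capacity factor, how such numbers translate into annual energy, treating availability as an independent multiplier purely for illustration (in practice the capacity factor already reflects downtime).

```python
# Rough, illustrative yield estimate for a wind farm -- not project data.
# Assumptions: 67.5 MW rated capacity (La Mata-La Ventosa), 40% capacity factor
# (the upper end quoted for Oaxaca sites), and the availability figures cited.
HOURS_PER_YEAR = 8760

def annual_energy_gwh(rated_mw, capacity_factor, availability=1.0):
    """Annual energy in GWh for a given rating, capacity factor and availability."""
    return rated_mw * HOURS_PER_YEAR * capacity_factor * availability / 1000.0

rated_mw = 67.5
cf = 0.40

for availability in (0.95, 0.961):
    e = annual_energy_gwh(rated_mw, cf, availability)
    print(f"availability {availability:.1%}: ~{e:.0f} GWh/year")

# Under these assumptions, the ~1 percentage point of extra availability
# achieved on site is worth roughly 2-3 GWh per year.
```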
All in all, Moeller believes that the La Mata-La Ventosa project has been invaluable as a way of learning about moving into an unfamiliar market: “This is a great platform for us to understand our international expansion needs,” he concludes. “It has enabled us to see where we need to spend time and energy.”
Note: This version was adapted from an article by Joe Zwers, a freelance writer based in Glendale, California, focusing on business and technology. David Hopwood is the Editor of Renewable Energy Focus.
Renewable Energy Focus, Volume 12, Issue 1, January-February 2011, Pages 10-12
<urn:uuid:720493be-1bda-4947-a700-18eb7731d9cd>
CC-MAIN-2016-26
http://www.renewableenergyfocus.com/view/18515/mexico-wind-project-gets-obama-seal-of-approval/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395613.65/warc/CC-MAIN-20160624154955-00168-ip-10-164-35-72.ec2.internal.warc.gz
en
0.950453
2,314
2.65625
3
The November 1898 "Portland Gale" forced the schooner Bertha E. Glover into Martha's Vineyard while carrying a cargo of lime. She sprung a leak; water got into the cargo, and it began to burn. The ship was lost as a result of the lime cargo getting wet. The vessel had been rebuilt in Rockland in 1882 for the lime trade. Dougherty lime quarry in Rockland was one of a number of quarries in the Rockland / Camden area. The steam engine in the bottom of the quarry could power air compressors to drive air drills or cranes to hoist out the cut stone. Lobster pounds are fenced off areas of water where lobsters can be stored while awaiting transportation or a better market. In this 1926 view of a Hancock, Maine, lobster pound, the fence can be seen; men in the foreground are netting up lobsters which can be stored in compartments in the float they are standing on to make them easier to retrieve for shipping. Many lobster pounds are quite large and need boats so that the operator can get around. Here they have a few dories and also a winch set up to help haul in a net. Today lobster pot buoys are made of a hard flotation foam and bought at marine supply stores. Fifty years ago, they were made of wood and had to be turned round on a lathe to give them their shape. Earlier buoys were carved by hand using a hatchet from squared off pieces of wood. Lobster fishing from a dory. Note that the header, or the net opening for lobsters to enter the trap, is at the end of the trap rather than on the sides. There is a mackerel seine boat in the background, steered with a steering oar, along with a nest of more seine boats. In the background a number of fishing schooners lie along a fish pier. This photograph was likely staged, and was taken in Gloucester Harbor, Massachusetts. Hoop nets were used in lobster fishing before wooden lath pots were developed. They were set flat on the bottom with bait attached in the middle of the hoop. These pots needed to be tended frequently, as the bait was the only thing keeping the lobster from leaving.
<urn:uuid:e025a287-b88d-48c1-a696-0f00cc5a12a4>
CC-MAIN-2016-26
http://www.penobscotmarinemuseum.org/pbho-1/type-collection-object/photoimage?page=27
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399522.99/warc/CC-MAIN-20160624154959-00099-ip-10-164-35-72.ec2.internal.warc.gz
en
0.979655
466
3.265625
3
This is an obvious statement, yes, but what we’re learning about bullying in the workplace and how it affects the workplace is astounding. And it’s our job in HR to ensure the mental, physical and emotional safety of our employees. First of all, what is workplace bullying? According to the Workplace Bullying Institute, Workplace Bullying is repeated, health-harming mistreatment of one or more persons (the targets) by one or more perpetrators that takes one or more of the following forms: - Verbal abuse - Offensive conduct/behaviors (including nonverbal) which are threatening, humiliating, or intimidating - Work interference — sabotage — which prevents work from getting done Sound familiar? Somewhat like the three prongs of hostile work environment albeit not sexual in nature. That’s because bullying is harassment and the latest information shows that it is just as toxic to the workplace as any form of discrimination or harassment. Per the WPI, mentioned above, those employees who were subjected to workplace bullying suffered mental and emotional harm: Bullying is often called psychological harassment or violence. What makes it psychological is bullying’s impact on the person’s mental health and sense of well-being. The personalized, focused nature of the assault destabilizes and disassembles the target’s identity, ego strength, and ability to rebound from the assaults. The longer the exposure to stressors like bullying, the more severe the psychological impact. When stress goes unabated, it compromises both a target’s physical and mental health. - Debilitating Anxiety, Panic Attacks (>80%) - Clinical Depression: new to person or exacerbated condition previously controlled (39%) - Post-traumatic Stress (PTSD) from deliberate human-inflicted abuse (30% of targeted women; 21% of men) - Shame (the desired result of humiliating tactics by the bully) – sense of deserving a bad fate - Guilt (for having “allowed” the bully to control you) - Overwhelming sense of Injustice (Equity – the unfairness of targeting you who works so hard; Procedural – the inadequacy of the employer’s response to your complaint) It’s not always the subject of the bullying who suffers consequences. A new study from Sweden shows that those who witness workplace bullying are subject to depression and post-traumatic stress disorder (PTSD) The number of men who were bystanders to bullying was larger compared to women. However, the proportion of women who were bystanders to bullying and developed depressive symptoms 18 months later was higher in comparison with men (33.3 and 16.4 %, respectively).” Again, stating the obvious, the best way to avoid such employee impact is to disallow bullying in the workplace. But how? Training managers to manage, not bully, their staff. Encouraging employees to come forward when they are subjected to or witness maltreatment of any sort in the workplace. Putting into place specific policies outlining what workplace bullying is and the consequences for bullying in the workplace. And perhaps, most importantly, advising and assisting company executives in creating and sustaining a culture of creativity over a culture of fear. Perhaps then, we can all go back to work to do what we were hired to do. Our jobs.
<urn:uuid:50bfa839-4f0c-4bbd-92b3-1efe418f23dc>
CC-MAIN-2016-26
http://hrlori.com/workplace-bullying-hurts-everyone/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397213.30/warc/CC-MAIN-20160624154957-00074-ip-10-164-35-72.ec2.internal.warc.gz
en
0.951845
685
3
3
There’s a lot of potential benefit to capturing graywater and stormwater to supplement traditional water supplies, but it doesn’t make sense for everyone, and there are plenty of legal, regulatory and climate-related hurdles in doing so, says Colorado State University’s Sybil Sharvelle. Sharvelle, associate professor of civil and environmental engineering and head of CSU’s Urban Water Center, served on a 12-member national committee charged with addressing the benefits and challenges of stormwater and graywater as supplemental water sources, as the nation faces widespread water shortages and droughts. The National Academies report, released publicly Dec. 16, was two years in the making and provides information on the costs, benefits, risks and regulations associated with capturing these alternative water sources. According to the report, stormwater is “water from rainfall or snow that can be measured downstream in a pipe, culvert or stream shortly after the precipitation event.” Graywater is “untreated wastewater that does not include water from the toilet or kitchen, and may include water from bathroom sinks, showers, bathtubs, clothes washers and laundry sinks.” The report recommends best practices and treatment systems for the use of water from these sources; for example, in many locations with heavy rainfall, it’s possible to store excess water in aquifers for use during dry seasons. In some cases, stormwater captured at neighborhood and larger scales can substantially contribute to urban water supplies. Graywater is best for non-potable uses like toilet flushing and subsurface irrigation. It has potential to help arid places like Los Angeles achieve substantial savings, and it serves as a year-round, reliable water source, according to the report. Larger irrigation systems and indoor reuse systems would require complex plumbing and treatment retrofits that are typically most appropriate for new, multi-residential buildings or neighborhoods for future urban planning. The report cites the Eloy Detention Center in Arizona, which reuses graywater from showers and hand-washing to flush toilets. The facility has observed water savings of 20 gallons per day per inmate. Sharvelle said the need for the report arose before the onset of widespread drought in the western United States. “The use of these resources has been hindered by a lack of national guidance and ambiguous regulations for water quality targets,” Sharvelle said. Sharvelle led an analysis of residential stormwater and graywater use in Los Angeles; Seattle; Newark; Madison, Wis.; Lincoln, Neb.; and Birmingham, Ala., and calculated potential savings for conservation irrigation and toilet flushing. The bottom line is there’s no single best way to use these resources, because whether they’re successful or economically viable depend on a host of factors: legal and regulatory constraints, climate, and source water availability. The report is online, and a webinar is planned for early 2016 to further detail the findings. The study was sponsored by the U.S Environmental Protection Agency, National Science Foundation and other agencies. CSU’s Urban Water Center is part of the university’s One Water Solutions Institute, which seeks to connect CSU’s world-class research with real-world water challenges.
<urn:uuid:1159c86b-ab9d-4cf2-8151-fec522b2bc8e>
CC-MAIN-2016-26
http://source.colostate.edu/expanding-water-supplies-report-shows-benefits-risks-of-stormwater-and-graywater/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397111.67/warc/CC-MAIN-20160624154957-00040-ip-10-164-35-72.ec2.internal.warc.gz
en
0.934139
673
3.140625
3
This volume outlines health issues, facts, strategies, and program resources that relate to factors with well-established impacts on health status across the world - alcohol and other drugs, environmental health, and food and nutrition. We have included them because they have been identified as having a major impact on ill health and premature death among Aboriginal people in the Northern Territory. The Aboriginal Health Policy (1996) provides a model for understanding the relationship between ill health, direct causative factors and underlying factors.
[Figure: "Factors contributing to Aboriginal ill health", Aboriginal Health Policy 1996:29. Note that all the underlying factors impact on the direct causative factors.]
The model shows that health is related to a range of underlying factors, which underpin the direct causative factors which are the content of this Volume - and which, with the exception of relevant health knowledge, appear to be beyond the influence of health service providers. In fact, community health care workers can make an active contribution to these underlying health factors. The model documents established evidence that the impact of underlying and direct causative factors begins before birth. We know that the foundations for health are laid down before birth, and continue to be built during the critical periods of infancy and early childhood development, and through childhood and adolescence. Good health in early adult life is the cumulative legacy of earlier life experiences to be valued, consolidated, and sustained through the years of increasing maturity. A 'life course' approach is particularly relevant to a complex of chronic diseases - diabetes, cardiovascular and renal disease - that is increasing world-wide. Together with chronic respiratory disease and injury, they are the main causes of the excess death and ill health needing hospital admission that are experienced by Aboriginal people in the Northern Territory compared with other NT citizens as a group. These problems are linked by a set of shared and compounding direct causative factors - alcohol, tobacco, environmental health, food and nutrition - whose influence begins in foetal life through their effects on developing organs and metabolic processes, continues across the years - and is amenable to intervention at every point along the life course. The Aboriginal Health Policy model shows that these factors are related, in turn, to underlying factors that are cultural, locational, social, and economic. Socioeconomic status has long been recognised as the most powerful determinant of health. It is a broad indicator with many associations, such as educational attainment, employment status, financial means, and where and how people live. This recognition is the basis for social policies which make it possible for families, children, young and older people, and other groups in our community who might otherwise be unfairly disadvantaged, to have the capacity to live a healthy life and participate to their full potential in a society that actively cares for its members. The components that contribute to the association between health and socio-economic status are still being researched and clarified, to see if there are interventions which could influence particular aspects and improve health when changing people's economic and social situation in the short term is just not possible. The model identifies a sense of power over, and responsibility for, one's life as an important underlying factor for health.
Recent studies have provided evidence of an association between the socio-economic status of people and the degree of control that they feel they have over their lives; and further, that this sense of personal control directly impacts on physical health as well as mental and social health. It appears that the experience of a low level of control over a prolonged period, particularly in the predicament of low control coupled with high demand to cope with problems, can cause persistently elevated levels of 'stress' hormones. While rapid production of these hormones can be vital in situations of acute stress, chronic elevation can generate biological harm - especially elevated blood sugar levels, cardiovascular disease, and their associated complications. Besides material and knowledge based resources and assets, socio-economic status also then appears to reflect the capacity of people to resolve problems that confront them and their associated level of feelings of security or uncertainty, confidence or anxiety, dependency or self reliance, and control over their lives and their health. This domain of 'control' is where all community health care workers can make a difference. There is research evidence that developmental interventions which teach people to problem solve - even starting as early as pre-school children - can produce improvements in both their capacity to take action to solve problems, and in subsequent long term indicators of social function and socio-economic status itself, that are meaningful and lasting. Providers of community care services may not be able to directly change people's socio-economic status, educational attainment, employment or financial status, but we can all work in ways that recognise and reinforce people's capacity to know what is needed and how to do it, and that strengthen their control and their confidence. This volume sets out some facts and some options for action related to direct causative factors of ill-health which Aboriginal people have identified as important - alcohol and other drug misuse, proper food and nutrition, and a healthy environment. They are underpinned by the ways of working discussed in Volume I. These ways of working aim always to strengthen the capacity of people to have power over, and responsibility for, their own lives - which has been identified by Aboriginal people and in the literature, as critical underlying factors for health. Chief Health Officer
<urn:uuid:f989d817-78a1-410b-8462-4967bdfac0b5>
CC-MAIN-2016-26
http://www.nt.gov.au/health/healthdev/health_promotion/bushbook/volume2/introduction.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397565.80/warc/CC-MAIN-20160624154957-00013-ip-10-164-35-72.ec2.internal.warc.gz
en
0.958565
1,085
3.15625
3
Wildcat analyse the situation, role and struggles of women in China from the Cultural Revolution until today. Female Workers under Maoist Patriarchy One may think socialism wiped out the Chinese form of "feudalistic" patriarchy. At least, Maoism improved the women's situation in comparison to the time before "liberation", in the cities as well as on the countryside. After "liberation" in 1949 most urban women did wage labor in state-owned factories or other businesses, while rural women were drawn into the people's communes' labor service. That changed their position in the family, also because due to the low wages in the Mao-era the women's wage was an important part of the family income (Wang: 159). But even though the women were not to the same extent locked up in the house and new laws treated them more or less the same as men, their life took still place in a patriarchal framework. On top of the "traditional" household work they had to do wage work - mostly outside the family or the community of women in which they grew up (McLaren: 171). The socialist regime adopted changed forms of "feudalistic" patriarchy and integrated them into the new forms of social organization. In her book "Gender and Work in Urban China. Women workers of the unlucky generation" author Liu Jieyu follows the fate of some urban female workers of the generation of the Cultural Revolution (age-group born about 1945 to 1960). Women were hit harder than men by the redundancies following the restructuring of the state industries after the mid-1990s. 62.8 per cent of those laid-off were women, but they only constituted 39 per cent of the urban workers (Wang: 161). Liu wanted to find out which factors played a role here and how the women's life under socialism was dictated by the patriarchal structures and social norms. The author, today a lecturer of sociology at the University of Glasgow, grew up in Nanjing, and her mother belongs to those who were fired by their danwei (work unit) in the 1990s. Liu talked to more than thirty women from her mother's generation, nearly all of them unskilled workers, about their experiences and their life situation. Whether during the "egalitarian collectivism" of the Mao-era or in today's "socialist market economy", the interviews show that the women were disadvantaged and discriminated in each phase of their life. History of Discrimination The urban Cultural Revolution-generation - the first one born under "socialism" - saw the central turning points in the history of the People's Republic of China: the "Great Leap Forward" and the following famine at the end of the 1950s and in the early 60s, the "Cultural Revolution" in the 1960s and 70s, the beginning of the reforms and the "One-Child-Policy" in the 1980s, the repression if the "Tian'anmen Movement" at the end of the 1980s and the drastic restructuring of the 1990s. Those women who remember the campaigns of the 1950s and the "Great Leap Forward" have seen the extent of the subsequent famine catastrophe. Their accounts are infused with the contemporary state rhetoric, the official version: The wage labor of women, their breaking out of the households was seen as a sign of liberation and shapes their memory until today. The term housewife (jiating funü) still has a bad tone for them. Liu writes: "Although their mothers went out to work, they were not as liberated as official history would have us to believe. In the workplace, these women's mothers only performed the lower paid jobs in the service, textile and caring industries. 
Inside the family, the traditional patriarchal pattern still persisted. Interviewees reported that their mothers, sometimes with help from themselves, were in charge of domestic affairs while fathers were mainly breadwinners and decision-makers." (Liu: 27) The women react with bitterness when they remember the preferential treatment of sons (zhongnan qingnü).1 In the early 1950s the regime still encouraged women to have as many children as possible, which contributed to enormous population growth. In the families, boys were treated better than girls and were more likely to be chosen to receive (higher) education. The girls had to do the housework, including taking care of smaller siblings and the grandparents. That in turn affected their school education. "The women themselves attributed the neglect of their education to traditional 'feudal' attitudes. However, in a labor market biased against girls, investment in a son's education is a rational decision." (Liu: 29) So due to the gendered division of labor and the "traditional" privileging of boys, the women had fewer chances in life - in getting an education, and later on the labor market. During the Cultural Revolution, from the mid-1960s onwards, there were slogans like "Now the times have changed, men and women are the same", while at the same time all feminist demands or references to the special problems of women were denounced. They were seen as "bourgeois" (Honig: 255). Class origin was the decisive factor which determined whether someone was attacked and re-educated or not. For women, the main criteria for class assignment were their (father's) origin and their marriage (the origin of the husband). The children of so-called "class enemies" had to deal not only with the attacks on their parents, but they themselves had problems in school and were excluded from many activities - or they did not want to take part because they were sick of all the attacks and apologies. Elite families that were attacked during the Cultural Revolution could still use their connections to make sure their children received an education or job training, while workers' children - with or (allegedly) without "good family background" - could not finish their education because the schools were closed and the children sent to the countryside. The first wave of children being sent to the countryside took place between 1966 and 1968. The school education or job training of those youths was interrupted or stopped for good. To this day Chinese people say that generation has "learned nothing". The official reason was that the "intellectual youth" (zhishi qingnian) had to be re-educated in the countryside. Actually, there were also other reasons behind it, for instance the lowering of urban unemployment. But not all children were sent to the countryside. Students at professional schools could stay in the city, as could a small quota from each school class. Parents with good connections also had the chance to keep their children in the city. A second wave was sent away between 1974 and 1976. This time the main criterion was how many children each family had kept in the city and how many had already been sent to the countryside. Families with more kids in the city had to send some to the countryside. In the countryside, men and women worked in different production teams. Men had to do the allegedly "harder" work. For instance, they had to carry the bags with rice seedlings, while the women had to plant them - often in a squatting position for hours.
The hardship of a task was valued by "work points" (gongfen). One woman recalls: "In our place, men's labor was worth 10 points. The worst of them got 8.5 points. The best got 10 points. As for women's labor, the highest was 5.5 points." Another woman says: "We were only worth half labor." (Liu: 34) The women interviewed nevertheless talk about their tough labor and the hardships they endured on the countryside with pride. They use the term "chi ku", literally: eating bitterness. "All of them had no doubt that work was an inevitable part of their life. In this sense, the state campaign positively shaped their gendered identities by enforcing their identity as a worker; but, at the same time, despite the official rhetoric, they had experienced a gendered division of labor at work which rendered them inferior to men." (Liu: 35) In the interviews the women avoid speaking about their own participation in the Cultural Revolution's Red Guards. They underline the chaos, a result of the political attacks and the interruption of school education, but when their own involvement is concerned they appear as "outsider, follower or silent sympathizer" (Liu: 36). "This common avoidance of the label 'Red Guard' in women's memories of the Cultural Revolution is related to the post-Mao depiction of Red Guards as perpetrators of violence, unjustified attacks, and it shows how the women's memories of the past were reconstructed according to the present through a publicly available account." (Liu: 37) Even though the Red Guards' violence was directed against the "class enemies", it was still often "sexualized" and "gendered". Many young women were exposed to sexual assault, on the countryside by local cadres, in the cities by Red Guards and other gangs (Honig: 256, also see Xinran: 160, 185). During the Cultural Revolution women were attacked because they wore fashionable clothes or looked "feminine". The female Red Guards dressed like men. Whoever behaved like a women could be seen as a "backward element" (luohou fenzi). There were cases where women were attacked under the pretense of "sexual immorality". One woman says: "At that time, people were attacked for bad class origin. To women, at that time, people would say, you had 'lifestyle problems' [a euphemism for sexual immorality]. Such lifestyle problems would be a huge blow to you. When they had no reasons to attack you, they would say that you had lifestyle problems. I remembered during the Cultural Revolution, those women who were said to have lifestyle problems wore a string of worn shoes around their shoulders, parading through the streets, being tainted as 'broken shoes' [a euphemism for a loose woman]." (Liu: 38)2 This kind of "morality" also played a role for the control and surveillance of women and their sexuality in the danwei. The first generation of those sent to the countryside returned to the cities after Mao's death in 1976, the second generation after 1978. The year before the high schools' entrance exams were taken up again. Most women did not apply anymore, though. They had missed too many years of education. The first generation was assigned work in the danwei. The second generation finished middle school in the early 1980s. Because of unemployment they did not get work assigned, but were taken over by their parent's (often mother's) danwei. Work in the state combines According to Liu the danwei-leaders played the role of the traditional family patriarch. 
The Confucian family, theoretically obsolete under socialism, was transformed into different forms of everyday control and discrimination.3 The danwei's family culture - the combination of public and private spheres - added to the strengthening of the gender segregation at the workplace and the gender division in society. "The mobilization of women into the workplace did not bring about the liberation in the way socialist rhetoric claimed. The socialist work unit operated as an arbiter of women's careers and personal lives and continued the patriarchal function of pre-socialist institutions. As a result, women workers were put at a greater social disadvantage than their male counterparts, and lost out in the economic restructuring." (Liu: 86) The "danwei was not gender-neutral; instead, gender was a complex component of processes of control." (Liu: 64) The assignment of work-places always followed the gender lines (without openly expressing this). The gender specific segregation of work was horizontal and vertical. The horizontal segregation describes the difference between "heavy" and "light" industries. Women made up 70 percent in the "light" industries, 20 percent in the "heavy" ones. The workplaces were also separated in "heavy" and "light". Women took the allegedly "light" jobs, but the distinction was arbitrary. "This division of labor took the 'natural' difference between men and women for granted and suggested the underlying assumption that women's 'weak' physique was best suited to 'light' work." (Liu: 42) Men were also rather assigned to jobs that demanded "skills" while women took less skilled jobs. Referring to the cases of two state companies in Guangzhou, Wang writes: "Men were overwhelmingly assigned to technical jobs and women to non-technical, auxiliary, and service jobs, regardless of educational level. This gendered employment hierarchy established women's subordinate position and shaped women's self-definition." (Wang: 159, see also: 168/9) Already in the 1980s there was a trend initiated by the state to transfer women workers to "auxiliary sections" (departments such as cleaning, the canteen, the factory clinic) in order to reduce the labor surplus (Liu: 43). The vertical segregation describes the chances for promotion. In Chinas danwei all employees were either workers (gongren) or cadres (ganbu). Among those who could become cadres were: 1. Ex-soldiers, at least in the rank of platoon leader; 2. graduates from vocational schools or colleges; 3. workers who were promoted. Very few of the soldiers were women. Women were disadvantaged in receiving higher education or professional training. So there was only the last option left. There were three hierarchical levels of cadres, junior cadres, middle-level cadre and senior cadre. Women usually only reached the first level. And those who made it had rather symbolic positions (for instance leader of the Youth League). Another precondition for promotions and for avoiding being laid-off in the 1990s was party membership, and women were also disadvantaged here. The fact that women worked in the low-wage industries and segments was due to this horizontal and vertical segregation.4 Two aspects played a role: biaoxian, literally performance or conduct, here more precisely work performance and politically correct behavior, as judged by the superiors; and guanxi, the contacts and connections with higher employees or functionaries and the delivery of favors. 
Both are connected since they include forms of pressure, obedience, good conduct and "emotional work". The allocation of wages, benefits and promotions were based on the assessment of biaoxian. Apart from the work performance the social behavior was controlled, so that there was also a moral aspect, that is whether a woman behaves in a proper according to her status, sex and role (for instance as a mother). The guanxi were and are the base for getting the courtesy of the superiors and functionaries. They play a role in all aspects of social life in China, for instance in getting a job or flat, or for promotions. Since women in the danwei had an inferior status, male and female workers tried to build up good contacts mainly to men in higher positions. Women often only had connections to lower cadres, cadres with low influence, "bad guanxi". All in all, women could not pay as much attention to biaoxian and guanxi because they had to deal not only with wage labor but also domestic labor. Furthermore, they often lived in their husband's danwei (or worked there in lower positions), so they often had no network of their own but had to rely on their husband's guanxi. Whenever women could establish good guanxi they often got the reputation - even amongst female colleagues - of trading in sexual services. Men in higher positions, on the other hand, used their status and put sexual pressure on women or molested them. Women had to develop strategies to avoid those situations without finally having male superiors as their enemies, and without gaining a bad reputation among other workers. "The golden rule for women to maintain a good reputation is to avoid close contact with men, which comes into tension with those practices of biaoxian and guanxi." (Liu: 64) Women had limited space to evade that pressure. They stayed ordinary workers until they were sacked. According to Liu, life in the danwei was determined by forms of familiarism. She highlights four aspects: the arrangement of marriages (matchmaking for young people), the allocation of housing (an incentive to marry), the surveillance of family life (to stabilize the marriages) and family planning (i.e. population control). In China the arrangement of marriages (matchmaking) is seen as an honorable and virtuous undertaking. Often many people, cadres and ordinary workers, are involved in arranging marriages for the youngsters. Under Maoism it was also seen as a task of the danwei. Difficulties occurred when a proposed person was turned down or when there were problems during the marriage, because that concerned the relation to the matchmaker who arranged the marriage as well. Women who did not want to marry were seen as "strange". Some married just to escape the social pressure and discrimination. Many Chinese are more tolerant where single men are concerned. The acceptable upper limit for getting married is an age of 25 for women and 35 for men. The allocation of housing (an incentive to marry) was a general problem. Flats were rare and had to be allocated by the danwei. Male workers were privileged. Often only men could apply for a flat. Single men got a place in the dormitory; single women had to stay with their family. The traditional form continued: The woman became part of the family (here: danwei) of her husband. "This housing arrangement in the danwei further reinforced the traditional idea of female dependency in marriage and family life". (Liu: 69) Mothers passed this ideology over to their daughters. 
They took care of them, until they found work and married. Then they expected the daughter's husband's family to provide a flat (and money for the wedding). In case of marital problems the women had to cope with the living situation. Since they had no flat of their own, they might have to move back to their parents. But even earlier they had problems, for instance because of the long times of commuting to work (in another danwei) or because they had to take their kids to their danwei's-kindergarten. Today there is a market for rented flats but the rents are so high that most women cannot afford them. The surveillance of family life (to stabilize marriages) happened within the danwei. The cadres had an interest in keeping up good relations among workers and other residents. In case of conflicts a "reconciliation committee" or "neighborhood committee" intervened. "Whatever justifications the committees provided to people with grievances, they tried to persuade women to comply with gendered social expectations and to make compromises in order to maintain family harmony." (Liu: 71) For instance, they advised women whose men had extramarital affairs to ask themselves what they had done wrong. Despite all the socialist rhetoric about equal rights in the family, in reality the traditional ideology of gender roles prevailed. In the danwei-housing units women were also controlled by the neighbors, who reported to the committees. Family planning (i.e. population control) in China went through different phases. From the 1950s until the 1970s China saw - supported by government propaganda - high birth rates. The only exception was the period of the "Great Leap Forward" in the early 1960s when the immense work pressure, the precarious supply situation and famines reduced the birth rate. After 1979 public birth control started with the One-Child-Policy. The danwei-leadership controlled the reproductive performances of the female workers. "It is women's bodies that undergo all the processes imposed like close examination, forced abortion, use of obstetric health services." (Liu: 74) Women were supposed to have just one child and to renounce having more for the benefit of the "nation", but paradoxically women could also partially use the One-Child-Policy for their own benefit: Some refused to have more children in order to have more freedom. Others thought (and think) of the One-Child-Policy as just "another sacrifice"5 they had to make for the state (Liu: 76). In the case of the first child being a girl, women were put under pressure. Socialist and traditional patriarchy clashed here: The family expected a boy to continue the family line, the state only allowed one child. Women took the big part of the burden, and their behavior was controlled.6 Liu also discusses the control over time from the perspective of the gendered division of labor. Since the definition of time distinct from wage work time is a manifestation of gender discrimination, she starts with distinguishing four kinds of time: necessary, contracted, committed and free time.7 "Necessary time refers to the time needed to satisfy basic physiological needs such as sleep, meals, personal health and hygiene and sex. Contracted time refers to regular paid work. Time for traveling to work is included here... Committed time encompasses housework, help, care and assistance of all kinds, particularly pertaining to children, shopping, etc. Free time is the time left when the other time activities are removed." 
(Liu: 76/7) "Time wealth" depends on having appropriate amounts of time, control over time and in having similar time rhythms as other family members. Liu calls that "personal time sovereignty". (Liu: 83). For the women the organization of the danwei again and again created time crises and played a role in upholding the gendered hierarchy. Although the women were doing wage labor and, therefore, had to spend time at work ("contracted time") they were not relieved of the "traditional" task of a "good wife and mother". The majority of the women Liu interviewed had to do machine work in a three-shift system. They were subordinated to the machine time, while men in their workplaces took over jobs that allowed more control over time (day shifts, maintenance, office work...). Women constantly had to solve time crises, caused by the three-shift system with its blurring of day and night, and by the conflicts between "contracted" (work, commuting) and "committed" time (domestic work or "household management", children) (Liu: 79). That usually led to a constant conflict between wage labor and family task, and to exhaustion. Many women changed their work places - regardless of biaoxian and guanxi - often to inferior, lower paid jobs that still gave the women more time. Even though the danwei partially helped the women workers to do both, wage labor and domestic work, these arrangements also meant that women were not seen as "proper" workers. The "family distractions" were one factor in the decision to sack woman first (Liu: 81). Women were also disadvantaged regarding the non-work time (non-contracted time). In the danwei all workers, male or female, had to attend meetings outside of working time, for instance political study sessions. In the 1980s assessment tests were introduced that had to be passed before promotions. Preparing for the tests had to be done in non-work time. Women had more problems to invest time because they were busy with domestic work when not doing wage work. According to a study of the Chinese Women's Federation, women spent 260 minutes a day doing domestic work, men did 130 minutes (Liu: 82).8 Women did not have much time for social activities either. Due to the traditional gender discrimination, the possibilities for married women to socialize with other people were limited. They "virtuously" stayed at home, and they found social relations predominantly during working hours. That is where they exchanged information and formed social networks. However, the main topics of conversation circled around the traditional roles as wives and mothers, further enforcing these roles. Return to house and home In the reform phase after 1978 the income gap widened and the gendered segregation of the new labor market increased. Already from the early 1980s on there were campaigns for the "return home" (hui jia) of urban women. At that time more than ten million "returned youth from the countryside" added to an increasing urban unemployment, and the return of the women to house and home was supposed to reduce it. The women should leave the danwei to increase productivity in the socialist planned economy, too. They were asked to sacrifice themselves again for the "nation" (Wang: 163/4). When with the restructuring in the 1990s, increasingly after 1997, 85 percent of the redundancies were happening in the industrial danwei, the women were hit harder. There are several reasons: Their percentage in the workforce of the industrial danwei was especially high. 
Sex and age were the critical factors in choosing the workers who were then laid off, not so much education and skill. Many women were just 40 years old when they had to retire and leave their job, men often 50 and older.9 That was backed up by the idea that men can perform better when old than women. When the situation of the company changed (because it got new orders...) men were more likely to be called back or "hired", even when they had to retire earlier. Furthermore, the auxiliary and service departments - where women worked - were the first to be dismantled. The guanxi (connections/contacts) played an important role here. Men had more opportunities to prevent forced retirements, and the financial burdens they brought with them, by using their contacts and connections or asking to be transferred to another department. But Liu also describes how the women she interviewed did not just accept being laid off or retired but searched for ways to defend their interests. They asked to be transferred, called in sick, used their husband's guanxi or went just for the best form of redundancy or retirement. Some women also accepted the dismantlement because afterwards they had more time for their family tasks - as long as it was financially sustainable. In that case their husbands supported it, too. Both, wife and husband, saw the women's work as a source of an additional income, the domestic work was seen as the main responsibility of the wife. But this "choice" was limited. Wang cites a manager who made clear, that they sacked women first because they expected less resistance. He said: "If you lay off men, they will get drunk and make trouble. But if you lay off women, they will just go home and take it quietly by themselves." (Wang: 162) This hints to a strategy of party cadres and factory directors whose main aim was to avoid social conflicts. They calculated that it creates less unrest to fire a woman of a family and not the man. After being laid off the people kept their flat, but not other benefits like medical care. That was especially hard for those women who were "bought out", i.e. who got compensation and whose connection to the danwei was completely cut off afterwards. One former female worker said about that: "We have no connection with our former danwei, they treated us like thrown away rotten meat." (Liu: 107) The laid-off women found little support in the newly adopted forms of the "three guarantees", the small benefit payments for sacked workers. Due to the financial crisis of the danwei and corruption, the "guarantees" did not work. Cut off from state financial support the women had to resort to informal ways that were on the rise since the transformation to a market economy had begun. The decay of the danwei or the women's cutting-off reinforced the family connections the women now had to rely on. In some cases the laid-off women supported each other. The pressure to find a new job was big - partly due to the financial problems after their redundancy, partly because the children were in puberty and the rising costs of education and job training had to be covered. While looking for a job the guanxi again played a major role, the connections to people of power and influence, but also certain forms of "social capital", the women's own networks, for instance with former female colleagues, resources the women could draw on. 
Women mainly found jobs in the lower segments of the labor market or as precarious street sellers, a result of their previously low social status and comparably "bad guanxi".10 "Women with poor social capital were trapped in a vicious circle of low-paid, unskilled part-time work providing only further poor social capital. Former cadres were able to maintain their social positions; the workers were vulnerable to downward mobility" (Liu: 115). The gendered networking reproduces the segregation of the labor market. The laid-off women were too old for the newly created jobs in "private" services, their skills were too low, and they were not young and charming enough. Young and attractive women pushing onto the labor market from the countryside, or fresh out of school, got these jobs. While women, considering all their problems, often accepted low-paid jobs, men frequently refused them because they saw it as undignified to do lower jobs with a bad reputation. In some cases women did not search for new jobs at all because of their duties and domestic work. "She became a full-time family servant", writes Liu about one woman (Liu: 115). Most women had to take care not just of their own family but were also used as unpaid laborers by members of the extended family. The women Liu interviewed were for the most part doing wage labor, but none of those working in the private economy had a work contract or regulated working hours in their part-time work. Many were molested and insulted by their bosses. The self-employed lost money and were harassed by the authorities. That produced a kind of nostalgia for the former situation in the danwei, especially for the social "security" of that time. Only the few who had started a successful career considered the restructuring and social transformation positive, because they appreciated the new "liberties".
The following generation
Liu interviewed the women's daughters, too. Most of them were born after the beginning of the One-Child Policy. Unlike their mothers, they were the center of attention in their families. The "traditional" Chinese family was parent-centered, that is, the needs of the parents stood above those of the children, and children were expected to pay respect and honor to their parents. When the first One-Child generation grew up, this old constellation collapsed bit by bit.11 In the danwei the One-Child Policy was strictly imposed,12 so that many families could have just one daughter. Subsequently the educational gap between boys and girls was partly closed. Many women of the "unhappy generation" who had enjoyed little education and experienced many setbacks in their lives invested a lot in the development and training of their daughters "to realize vicariously their unfulfilled dreams" (Liu: 126). The work around the children still lay on the mothers' shoulders; the fathers stayed away from it. In some families the mother dealt with all aspects of life, the father only with educational questions. Mothers tried to adapt their own work to the needs of the child, for instance by changing from rotating shifts to day shifts in order to have more time for the child - even if that meant accepting disadvantages at work. The "unhappy generation" of women suffered under three burdens: they had to "pay honor" to their own parents and care for their needs, they did everything for their child(ren), and they had to answer to the demands of their husbands.
After being laid off by the danwei - their "return home" - they temporarily or permanently became full-time mothers. The daughters liked that, because their mothers had more time for them and cooked regularly. The daughters accepted that their mothers were sacked as unskilled workers. They considered it a necessary sacrifice of the older generation during the transformation to a market economy. For them, "society" and its interests stood above the "individual". They supported the reforms even though these were responsible for their mothers losing their jobs and the security of the danwei. And they accepted the official slogans and explanations that justified the social hardships accompanying the reforms: stimulating self-initiative, supporting young employed people through domestic helpers from the danwei, making space for young workers. The daughters know what their mothers hoped for and expected from them, and they are very ambitious themselves. "The daughters' desire for success reflects the values of competition and efficiency which have been highly promoted in the changeover to the market economy." (Liu: 133) The daughters by no means want to repeat their mothers' past. While for the mothers wage labor was just a job, and promotion and career were not important, the daughters are different. They think about their personal development. They do not want to sacrifice themselves for the family; they do not want to live for their children (or their parents) (Jaschok: 122). Nevertheless, the daughters partly make use of the services of their mothers, who take care of the grandchild while the daughters lead their own lives and use their time in a different manner. The daughters do not want to sacrifice themselves for the family, but they leave their mothers in exactly that position.13
While few of the mothers recognized gender discrimination as the reason for their lay-off, attributing their disadvantages instead to biological differences, the daughters were quite conscious of gender disparities. The daughters experience discrimination on the labor market, sexual harassment, and violence that limit their space and opportunities. "The wider social constraints on woman are pervasive in post-Mao China" (Liu: 135). The young women have their own goals and plan their careers. They emphasize their independence - but at the same time they expect a future with a "breadwinner" husband for their nuclear family. Liu refers to Maria Jaschok here: "Jaschok interpreted the 'awakening desires [of young women] to change and adapt' more as 'a modernization of established patterns than as an experimentation with alternative life-styles'" (Liu: 135/6; Jaschok: 126). And Liu adds: "The daughters seemed to hold dual values, which were infused by past and present, tradition and modernity; the contradictions in their values were representative of the tensions and frictions arising from these oppositional ideologies" (Liu: 136). They have to bring together individualist and collectivist orientations. They want a modern and independent life without sexist discrimination, but they hold on to the "promise of happiness" through marriage and having children.14
Liu's research shows that proletarian women - especially the older ones - had (and still have) to pay a large part of the costs of the economic reforms in China. The laying-off of women from the danwei was the result of "the culmination of a lifetime of gender inequalities" (Liu: 143), from the Great Leap Forward until today.
Worse educational opportunities, more burdens in household and family, more pressure in everyday life, stricter surveillance of personal behavior, close control of sexuality and reproduction, fewer chances for promotion at work, a limited social network, lower wages: the list of results of the structural and personal discrimination of women is long. Still, the women of the "unhappy generation" hold on to beliefs in the "natural difference between men and women" and in the "feminine" readiness to make sacrifices. They cannot simply shed the patriarchal heritage of Confucianism, patrilineality15 and the strict control of women's chastity and monogamy. And even though their daughters are trying to find their own way, they have not broken completely with the "traditional" concepts. However, what is left is the hope that the young women will successfully fight for more control over their own lives.
Honig, Emily (2002): Maoist Mappings of Gender: Reassessing the Red Guards. In: Brownell, Susan/Wasserstrom, Jeffrey N. (eds.) (2002): Chinese Femininities, Chinese Masculinities: A Reader. Berkeley/Los Angeles/London
Jaschok, Maria (1995): On the Construction of Desire and Anxiety: Contestations Over Female Nature and Identity in China's Modern Market Society. In: Einhorn, Barbara/Yeo, Eileen Janes (eds.): Women and Market Societies: Crisis and Opportunity. Cambridge
Lipinsky, Astrid (2006): Der Frauenverband und die Arbeit im Privathaushalt. In: Lipinsky, Astrid: Der Chinesische Frauenverband. Eine kommunistische Massenorganisation unter marktwirtschaftlichen Bedingungen. Bonn, pp. 215-254
Liu Jieyu (2007): Gender and Work in Urban China. Women Workers of the Unlucky Generation. London/New York
McLaren, Ann (2004): Women's Work and Ritual Space in China. In: McLaren, Ann (ed.): Chinese Women - Living and Working. London/New York
Pun Ngai/Li Wanwei (2006): Shiyu de husheng. Zhongguo dagongmei koushu. Beijing (German edition: dagongmei - Arbeiterinnen aus Chinas Weltmarktfabriken erzählen. Berlin, 2008)
Solinger, Dorothy J. (2002): Labour Market Reform and the Plight of the Laid-off Proletariat. In: China Quarterly, No. 170, 2002
Wang Zheng (2003): Gender, Employment and Women's Resistance. In: Perry, Elizabeth J./Selden, Mark (eds.): Chinese Society: Change, Conflict and Resistance. 2nd edition. London/New York
Xinran (2003): The Good Women of China: Hidden Voices. London (German edition: Xinran: Verborgene Stimmen. Chinesische Frauen erzählen ihr Schicksal. München 2005)
Zuo Jiping (2006): Women's Liberation and Gender Obligation Equality in Urban China: Work/Family Experiences of Married Individuals in the 1950s. Gender Relations Centre, RSPAS, The Australian National University, and St. Cloud State University, Minnesota, USA. Online: http://rspas.anu.edu.au/grc/publications/pdfs/ZuoJ_2006.pdf (accessed 25 June 2007)
1 The corresponding term for "adults" is nanzun nübei, roughly: women are inferior to men. These sexist slogans are part of the (neo-)Confucian pulp that still gums up many social discourses in China.
2 Many Chinese use this term to this day. For instance, divorced women, in particular those with children, often have problems finding a new partner because they are seen as "worn shoes". Getting a divorce in China today does not promise (new) independence but loneliness, economic insecurity and gossip (see Jaschok: 119).
3 It was not just the patriarchal feudal structures that were adopted (something that happened in other Asian countries, too).
New versions of the imperial governmental units in China, from the mandarins down to the village heads, can also be found in the socialist structures.
4 Still, Wang points out that one reason for the acceptance of the gendered assignment of low-skilled jobs to women lies in the fact that the differences in wages and benefits within a danwei were rather small - in accordance with the egalitarianism of the Maoists. Another factor was that the situation of urban women working in a danwei was far better than that of rural women (Wang: 160).
5 On the Confucian and nationalist-socialist background of the notion of sacrifice (for the emperor, the state, the party, the family) see Zuo: 16.
6 China today has far more males than females because many parents have a sex test done before birth - and if it is a girl they abort the fetus. The ratio is around 117 males to 100 females.
7 Here Liu refers to Davies, K. (1990): Women, Time, and the Weaving of the Strands of Everyday Life. Aldershot: Avebury.
8 Lipinsky writes that in 2001, in 85 per cent of all families the women were "responsible for cooking, washing clothes, washing up dishes, tidying up, cleaning and other domestic tasks". Women spent 4 hours a day on domestic work, men 2.7 hours. This average covers countryside and cities; looking at cities alone, men do just 1.7 hours of domestic work per day (Lipinsky: 224).
9 Sometimes the ages were 45 and 55; the official retirement age is 50 (women) and 60 (men).
10 They worked, for instance, as domestic helpers or taxi drivers. See the article on domestic helpers in China on the website http://www.wildcat-www.de/dossiers/china and the review of the film "The Taxi-sisters of Xi'an" in the German edition of "Unruhen in China", page 77.
11 In public discourse - which is dominated by the party and the older generation - there are still many allusions to obedience towards one's parents, in the past few years even with open reference to reactionary Confucian doctrines.
12 That was not and is not the case in all areas and social groups in China.
13 This attitude of workers' children who by no means want to become workers themselves, but also of parents who want something "better" for their children, can be found anywhere on the planet. Whether the children manage to escape the "dirty" jobs is a different question.
14 The dagongmei, young women who migrate from the countryside to the cities to work in the factories, hold similar attitudes (see Pun/Li 2006).
15 The term for a patriarchal system in which one belongs to one's father's lineage, involving the inheritance of property, names or titles through the male line.
Article on the struggles of migrant workers in China from the wildcat supplement "Unrest in China", wildcat #80, winter 2007/08
www.Prol-Position.net
<urn:uuid:ea516dd8-4ec4-46eb-830f-250c178b31d6>
CC-MAIN-2016-26
http://libcom.org/history/1949-2007-women-workers-china
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398209.20/warc/CC-MAIN-20160624154958-00039-ip-10-164-35-72.ec2.internal.warc.gz
en
0.978769
8,724
3.5
4
The Good News About Breast Cancer
When I was a medical student, we never used the words cancer and cure in the same sentence unless we were talking about the far-off "year 2000". Well, 2000 has come and gone, and now that's changed! Today, more than 95% of women diagnosed with early-stage breast cancer will survive at least 5 years, which cancer specialists consider a "cure". Great strides have also been made in treating later-stage cancers and extending both length of life and quality of life for breast cancer survivors. Despite this good news, breast cancer remains one of the diseases women fear most. Women are scared by the misquoted statistic that one in eight women will get breast cancer; in fact, that figure applies to women who live to be 93. To optimize your chances of beating breast cancer, practice monthly breast self-examination (BSE), see your health care provider for an annual breast exam, and have a mammogram each year after age 40 or as recommended by your doctor. When diagnosed and treated early, breast cancer can be cured. And that is really good news.
Created: 10/3/2001 - Donnica Moore, M.D.
<urn:uuid:9acd4a6b-95c0-4123-bd7d-b13236cf74ff>
CC-MAIN-2016-26
http://drdonnica.com/radio/00004006.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392069.78/warc/CC-MAIN-20160624154952-00038-ip-10-164-35-72.ec2.internal.warc.gz
en
0.94635
264
2.53125
3
July 21, 2010
Bone Cells' Branches Sense Stimulation, When To Make New Bone
A long-standing question in bone biology has been answered: It is the spindly extensions of bone cells that sense mechanical stimulation and signal the release of bone-growth factors, according to research from The University of Texas Health Science Center at San Antonio. The study, reported this week in Proceedings of the National Academy of Sciences of the United States of America, offers an important clue for developing therapies to treat the bone-thinning disease osteoporosis and bone loss associated with aging, said Jean Jiang, Ph.D., senior corresponding author from the Department of Biochemistry, UT Health Science Center Graduate School of Biomedical Sciences.
Sensitive extensions
"Osteocytes are the most abundant cells in bone," Dr. Jiang said. "In the field of bone biology, there was a long-standing debate as to which part of the osteocyte senses mechanical loading. In this study, we demonstrate for the first time that it is the extensions, which are called dendrites."
Regular physical exercise is highly beneficial in maintaining bone health and in prevention of bone loss and osteoporosis. Mechanical stimulation of the bone through weight bearing is critical for promoting bone remodeling, said Sirisha Burra, Ph.D., lead author from the Department of Biochemistry.
"Maintenance of bone health depends on the osteocytes' ability to sense the stimulation," Dr. Burra said. "If osteocytes lose this ability, it could possibly lead to diseases such as osteoporosis. Hence, it is important to understand this mechanism."
The Health Science Center collaborated with Southwest Research Institute in San Antonio to estimate the mechanical impact of force applied to the dendrites. Magnitudes of mechanical stress were determined.
"Understanding how bone cells sense and respond to mechanical signals within the skeleton is an inherently multidisciplinary problem," said co-author Daniel P. Nicolella, Ph.D., institute engineer in the Mechanics and Materials Section at Southwest Research Institute. "We determined the mechanical stresses applied to the osteocytes in these experiments so that they can be compared to the mechanical signals predicted to occur within the skeleton during routine physical activities."
Toll of osteoporosis
Approximately 8 million women and 2 million men have osteoporosis in the U.S. Affected bone becomes brittle and can fracture with minor falls. In severe cases, a bone can even break from a sneeze. Another 34 million Americans are estimated to have low bone mass and are at higher risk for osteoporosis. (Source: National Osteoporosis Foundation http://www.nof.org/osteoporosis/diseasefacts.htm)
Bone loss has been observed in astronauts who have spent a long time in space. Greater understanding of the process of bone remodeling could also aid in the discovery of solutions for the degenerative joint disease osteoarthritis.
Apart from its clinical implications, the study is intriguing because it "brings in a novel thought that different parts of a single cell can have different material and sensory properties," Dr. Jiang said. "Different parts of the cell can react differently to the same stimulus. This is a very important fact to consider while studying cellular signaling and regulatory mechanisms."
On the Net:
- University of Texas Health Science Center at San Antonio
- Proceedings of the National Academy of Sciences
<urn:uuid:b508c199-6158-4e5e-8046-d0491f408592>
CC-MAIN-2016-26
http://www.redorbit.com/news/health/1894532/bone_cells_branches_sense_stimulation_when_to_make_new_bone/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395546.12/warc/CC-MAIN-20160624154955-00138-ip-10-164-35-72.ec2.internal.warc.gz
en
0.938909
704
3.03125
3
To see the Universe in full, astronomers have to get creative. They combine multiple photos taken by different cameras to make one colourful picture. For example, in this beautiful new picture of a star-forming cloud, the space telescope called Chandra only captured the purple regions. Meanwhile, another space telescope called Spitzer saw things a bit differently when it observed the same cloud - everything shown here other than the purple bits! But why don't these two telescopes see the star-forming cloud in the same way? The answer lies in the type of light that the telescopes are designed to observe. Our eyes can only see visible light. But there are many other types of light that can be detected by special telescopes, such as infrared, ultraviolet and X-ray. For example, the Spitzer telescope detects infrared light. Spitzer is perfect for observing dusty star-forming regions, as infrared light can travel through the dust. The Chandra telescope, however, can't see infrared light. Instead, Chandra can detect the X-ray light that is given off by gas when it is heated to incredibly high temperatures by hot, young stars. So, although the two telescopes give a different tale about what they see, they're both telling the truth!
Cool fact: The hot gas in this image (shown in purple) has a scorching temperature of 10 million degrees Celsius!
<urn:uuid:f80778cd-fdbf-466a-8f99-58cf1582d0a8>
CC-MAIN-2016-26
http://chandra.harvard.edu/photo/2011/ngc281/kids.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399425.79/warc/CC-MAIN-20160624154959-00041-ip-10-164-35-72.ec2.internal.warc.gz
en
0.937376
282
3.921875
4
HUM 2000 A1
July 25, 2013
Edgar Allan Poe Compared to Robert Frost
When comparing Edgar Allan Poe's "The Raven" to Robert Frost's "The Road Not Taken", it seems that there are plenty of obvious similarities on the surface and subtle differences that one can find by looking deeper into the meanings of things. In both poems the speaker invests all meaning in what he is seeing. The speaker in "The Road Not Taken" is viewing what is in front of him, ready to make an important decision in his life. He sees the roads as a paramount decision to make in his life. In "The Raven" the speaker is watching the raven that has entered his room, giving it major importance in what he is going through. In both poems the objects that lie in front of the speakers are devices; they are metaphors given the utmost importance. Both speakers are haunted by what has happened in their lives and what could happen based on the decisions that lie in front of them. The overall tone is the difference between Robert Frost and Edgar Allan Poe. You can look at any poem that either author has written and see this. Robert Frost dealt with the trials and tribulations that life throws our way, just as Poe did. At times Frost is dark and cynical about life, but overall he is an optimist and still sees beauty in life. Poe is the antithesis of this: he is inherently dark and gloomy in his work. In "The Road Not Taken" Frost's speaker is given a choice. He is at a fork in the road of his life. He has seen the path he normally takes; it is safe, but it has not made him as happy as he wants to be in life. The other road is dangerous. It comes with many risks and potential pitfalls, but he feels ready to take on this challenge now. He understands this road won't be easy, but he believes that anything worth having must come with hardships along the way. Life, and taking the safe road, has taught him this. It is an...
<urn:uuid:c7dd5bae-f3d9-403d-8f3d-6846612ab2bb>
CC-MAIN-2016-26
http://www.studymode.com/essays/Comparison-Of-Poe-And-Frost-1856126.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398869.97/warc/CC-MAIN-20160624154958-00035-ip-10-164-35-72.ec2.internal.warc.gz
en
0.987926
442
2.765625
3
National Drowning Prevention Alliance Standards for Water Smart Babies Lessons
All standards are based on current research and best practices.
State Permitted Facility
- The pools must be permitted by a local government agency that oversees aquatic facilities.
- The pool, deck, bathrooms and changing rooms must be clean and sanitary.
- Babies need warm water to have a successful experience. A water temperature of 93°-97°F is recommended.
- It is suggested that wet suits be used for temperatures lower than 90°F.
- All babies must wear an approved, snug-fitting swim diaper in the pool.
Small Class Ratio
- There should be a 1 to 6 instructor/child ratio, with a parent or caregiver holding the child and participating in the lesson.
- Instructors must have a national swimming certification and a basic course in infant water safety.
- They should be certified in CPR and First Aid for the professional rescuer.
- Background checks should be required for every instructor and staff member at an aquatic facility.
Nurturing Instructional Style
- A good program builds upon a child's successes.
- Instruction style should always be nurturing, positive and supportive.
- Instructors should be patient, gentle and enthusiastic to be successful.
Water Safety Curriculum
- Water safety skills should be taught.
- The lesson plans should be ordered in a step-by-step plan of development where one skill is built upon another in proper order. The skills should be practiced until they are mastered.
- The parent should know exactly what is expected of the child in each level of the program.
Please download our Prescription Form and email or fax it to us.
<urn:uuid:e2d3a57a-d24a-4a21-9bca-52e5282e788f>
CC-MAIN-2016-26
http://www.watersmartbabies.com/how-to-participate.php
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392159.3/warc/CC-MAIN-20160624154952-00021-ip-10-164-35-72.ec2.internal.warc.gz
en
0.892764
352
2.59375
3
There are at least 19 species of owls in North America. Sixteen types of these owls have been seen in Canada. They are the Barred Owl, Barn Owl, Boreal Owl, Burrowing Owl, Flammulated Owl, Great Gray Owl, Great Horned Owl, Long-eared Owl, Northern Hawk Owl, Northern Pygmy-Owl, Northern Saw-whet Owl, Short-eared Owl, Snowy Owl, Spotted Owl, Western Screech-Owl and the Eastern Screech-Owl. The Elf Owl, Whiskered Screech-Owl and the Ferruginous Pygmy-Owl are southern species and are more likely to be seen in Mexico. These different species of owls are unique in themselves, as some are capable of living in the arctic tundra where they hunt during the day while others hunt only at night. Because the majority of owls hunt at night, most owls are only heard and seldom seen. Some hunt from perches in trees or other vantage points and pounce upon their unsuspecting prey. Others will glide over fields in the same manner as hawks. While some can only live in certain habitats, others adapt to different landscapes and are quite able to survive.
<urn:uuid:7be8210d-f089-4c1f-b8b8-455a27119486>
CC-MAIN-2016-26
http://www.birds-of-north-america.net/owls.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397865.91/warc/CC-MAIN-20160624154957-00163-ip-10-164-35-72.ec2.internal.warc.gz
en
0.947798
272
3.546875
4
“Computer forensics (sometimes known as computer forensic science) is a branch of digital forensic science pertaining to legal evidence found in computers and digital storage media. The goal of computer forensics is to examine digital media in a forensically sound manner with the aim of identifying, preserving, recovering, analyzing and presenting facts and opinions about the information.”
Nonetheless, an IT auditor should refrain from providing an opinion on results obtained through agreed-upon procedures unless required to testify in a court proceeding. Whether target data are in transit or at rest, it is critical that measures be in place to prevent the sought information from being destroyed, corrupted or becoming unavailable for forensic investigation. When evidence is at rest, adequate procedures should be followed to ensure evidential nonrepudiation. Volatile data capture assists investigators in determining the system state during the incident or event. Consequently, the utilization of functionally sound imaging software and practices is essential to maintaining evidential continuity.
View Part I of the Irregularities and Illegal Acts Agreed-Upon Procedures Assessments series here.
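To make the idea of evidential continuity more concrete, here is a minimal sketch, not drawn from the article itself, of one widely used practice: computing a cryptographic digest of a forensic image at acquisition and re-verifying it before examination. The file path and helper name are hypothetical.

```python
import hashlib

def image_digest(path: str, algorithm: str = "sha256", chunk_size: int = 1 << 20) -> str:
    # Hash the image file in fixed-size chunks so large images need not fit in memory.
    digest = hashlib.new(algorithm)
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Record the digest when the image is acquired, then re-verify before analysis;
# matching values support the claim that the evidence was not altered in the interim.
acquisition_digest = image_digest("evidence/disk001.dd")
verification_digest = image_digest("evidence/disk001.dd")
assert acquisition_digest == verification_digest
```

This illustrates only the integrity-check step; a full chain of custody would also document who handled the media and when.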
<urn:uuid:46569387-65a1-408e-a22f-85d412315324>
CC-MAIN-2016-26
http://itknowledgeexchange.techtarget.com/it-governance/irregularities-and-illegal-acts-agreed-upon-procedures-assessments-part-iii/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393093.59/warc/CC-MAIN-20160624154953-00049-ip-10-164-35-72.ec2.internal.warc.gz
en
0.913368
217
2.90625
3
When many Mainers think of "cybersecurity," they probably remember the 2008 HANNAFORD SECURITY BREACH, when 4.2 million credit- and debit-card numbers were stolen from shoppers at the grocery chain's stores. What received little coverage amid the hype about the vastly overstated threat of identity theft (only 1800 accounts were actually used to make fraudulent charges — 0.04 percent of the stolen numbers) was that the breach was the first documented case of a new way of stealing this kind of information. Previously, most security breaches resulting in theft of credit-card, bank-account, or even Social Security numbers had come from a single incident — either a physical theft of a computer or drive containing that information, or by connecting to a computer via the Internet and breaking through whatever security it might have in place. (This happened, for example, to THE UNIVERSITY OF MAINE HEALTHCARE CENTER'S COMPUTERS in June, when an unauthorized person accessed data on about 4600 students who had sought mental-health help at the university.) But Hannaford's data was stolen over the course of several months, during transmission of the data from store cash registers to the system that the company used to verify card transactions. This process takes only seconds, as shoppers know, and became a target for thieves because protection had been beefed up on physical computers and their electronic defenses. The fact that some credit-card information is not encrypted when traveling over private corporate networks remains an issue for retailers, banks, and credit-card companies to resolve. (When traveling over public networks, the data must be encrypted.) Also, the Hannaford hack was claimed by some to be an inside job — and there's little defense against data theft by a person who is allowed into a data center. Most Mainers likely do not know that THE MAINE LEGISLATURE'S WEB SITE WAS HACKED just three months ago, resulting in some mild confusion about the lawmaking process. Specifically, the site's ability to designate the status of bills moving through the Legislature — including keeping users up-to-date on amendments and voting — was modified so that a user who clicked on various links would be taken to a Web site that would attempt to download viruses or other harmful software onto a user's computer. State computer-support staff took the site offline entirely for several days while they fixed the security hole and reloaded correct information into the database. This went largely unnoticed because the Legislature was not in session at the time.
<urn:uuid:2aaa4556-1398-49de-9e7d-70a50a1f3071>
CC-MAIN-2016-26
http://thephoenix.com/Boston/news/107714-maine-breaches/?rel=inf
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397428.37/warc/CC-MAIN-20160624154957-00137-ip-10-164-35-72.ec2.internal.warc.gz
en
0.962246
521
2.546875
3
Quality Lies in the Details
SIDEBAR 4: Capacitors & Capacitance
Capacitance is associated with "capacitive reactance," which can be thought of as frequency-dependent resistance. As frequency increases, the capacitance produces less and less "reactance," or resistance to current flow. The higher the frequency, the lower the opposition to current flow produced by the capacitance. A cable's capacitive reactance can be thought of as a resistor in parallel with the load. As frequency increases (and the capacitive reactance decreases), more and more of the drive signal is dropped across the cable's capacitance, and less across the load (the component we're driving). This is why cable capacitance should be kept low; the frequency at which rolloff begins is higher for lower-capacitance cable. To calculate the approximate -3dB low-pass point for a cable/source component combination, use the formula f = 1/(2πRC), where π ≈ 3.142, C is the cable's shunt capacitance in farads, and R is the source component's output resistance in ohms. With a high source impedance of 2k ohms (2000 ohms) and a cable capacitance of 3nF (3000pF), the -3dB high-frequency rolloff point will be 26kHz, resulting in an audibly dulled top octave. With a more typical source impedance of 100 ohms, the -3dB point with the same cable moves up to a completely innocuous 530kHz.—Robert Harley & John Atkinson
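As a quick check of the arithmetic above, the corner frequencies can be computed directly from f = 1/(2πRC); the small script below is only an illustration, and the function name and printed rounding are our own.

```python
import math

def rolloff_hz(source_resistance_ohms: float, shunt_capacitance_farads: float) -> float:
    # -3dB low-pass corner for a source resistance driving a cable's shunt capacitance
    return 1.0 / (2 * math.pi * source_resistance_ohms * shunt_capacitance_farads)

print(round(rolloff_hz(2000, 3e-9)))  # ~26526 Hz: roughly 26kHz, an audibly dulled top octave
print(round(rolloff_hz(100, 3e-9)))   # ~530516 Hz: roughly 530kHz, far above the audio band
```

The two results reproduce the sidebar's 26kHz and 530kHz figures to within rounding.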
<urn:uuid:31be65ff-b6ec-43aa-af5e-ac86db1fc90a>
CC-MAIN-2016-26
http://www.stereophile.com/content/quality-lies-details-sidebar-4-capacitors-capacitance
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392099.27/warc/CC-MAIN-20160624154952-00121-ip-10-164-35-72.ec2.internal.warc.gz
en
0.917075
327
2.84375
3
The Joys of Being a Teacher Actual answers and spelling on a 6th grade history test: 1. Writing at the same time as Shakespeare was Miguel Cervantes. He wrote Donkey Hote. The next great author was John Milton. Milton wrote Paradise Lost. Then his wife died and he wrote Paradise Regained. 2. Delegates from the original 13 states formed the Contented Congress. Thomas Jefferson, a Virgin, and Benjamin Franklin were two singers of the Declaration of Independence. Franklin discovered electricity by rubbing two cats backwards and declared, "A horse divided against itself cannot stand." Franklin died in 1790 and is still dead. 3. Abraham Lincoln became America's greatest Precedent. Lincoln's mother died in infancy, and he was born in a log cabin which he built with his own hands. Abraham Lincoln freed the slaves by signing the Emasculation Proclamation. On the night of April 14, 1865, Lincoln went to the theater and got shot in his seat by one of the actors in a moving picture show. They believe the assinator was John Wilkes Booth, a supposingly insane actor. This ruined Booth's career. 4. Johann Bach wrote a great many musical compositions and had a large number of children. In between he practiced on an old spinster which he kept up in his attic. Bach died from 1750 to the present. Bach was the most famous composer in the world and so was Handel. Handel was half German half Italian and half English. He was very large. 5. Beethoven wrote music even though he was deaf. He was so deaf he wrote loud music. He took long walks in the forest even when everyone was calling for him. Beethoven expired in 1827 and later died for this. 6. The nineteenth century was a time of a great many thoughts and inventions. People stopped reproducing by hand and started reproducing by machine. The invention of the steamboat caused a network of rivers to spring up. Charles Darwin was a naturalist who wrote the Organ of the Species. Madman Curie discovered radio. And Karl Marx became one of the Marx Brothers.
<urn:uuid:ff4b7292-327d-47b6-b001-d097fb3689da>
CC-MAIN-2016-26
http://www.cartalk.com/content/joys-being-teacher-0
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393332.57/warc/CC-MAIN-20160624154953-00101-ip-10-164-35-72.ec2.internal.warc.gz
en
0.991276
439
3.171875
3
James, P S B R (1989) Introduction. CMFRI Bulletin, 43. pp. 1-8.
The Union Territory of Lakshadweep, consisting of several inhabited and uninhabited islands, lies between 08°00'N and 12°30'N latitude and 71°00'E and 74°00'E longitude. The remoteness of the island territory from the mainland has forced the inhabitants to live in isolation amidst injustice, poverty, ignorance and ill health. Coconut and tuna formed the mainstay of the islanders' economy. The lagoons and the surrounding waters are replete with a wide variety of flora and fauna, and the tunas and the food fishes have been exploited ever since human settlement. The islands became a Union Territory of India in 1956. Since then there has been rapid progress, especially in the fields of agriculture, fisheries, education and health. Next in importance to agriculture, the fisheries sector plays an important role in the economy of the islands.
Uncontrolled Keywords: Lakshadweep; Marine Fisheries
Subjects: Fish and Fisheries
Divisions: CMFRI-Kochi > Biodiversity; Subject Area > CMFRI Brochures > CMFRI-Kochi > Biodiversity
Depositing User: Dr. V Mohan
Date Deposited: 24 Aug 2010 10:23
Last Modified: 09 Sep 2015 15:18
<urn:uuid:75281750-9018-483f-b59c-7c3922efdbd2>
CC-MAIN-2016-26
http://eprints.cmfri.org.in/2628/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396887.54/warc/CC-MAIN-20160624154956-00108-ip-10-164-35-72.ec2.internal.warc.gz
en
0.834047
336
3.015625
3
I2L n. See integrated injection logic. I2O n. Short for Intelligent Input/Output. A specification for I/O device driver architecture that is independent of both the device being controlled and the host operating system. See also driver, input/output device. i386 n. A family of 32-bit microprocessors developed by Intel. The i386 was introduced in 1985. See also 80386DX. i486 n. A family of 32-bit microprocessors developed by Intel that extended and built upon the capabilities of the i386. The i486 was introduced in 1989. See also i486DX. i486DX n. An Intel microprocessor introduced in 1989. In addition to the features of the 80386 (32-bit registers, 32-bit data bus, and 32-bit addressing), the i486DX has a built-in cache controller, a built-in floating-point coprocessor, provisions for multiprocessing, and a pipelined execution scheme. Also called: 486, 80486. See also pipelining (definition 1). i486DX2 n. An Intel microprocessor introduced in 1992 as an upgrade to certain i486DX processors. The i486DX2 processes data and instructions at twice the system clock frequency. The increased operating speed leads to the generation of much more heat than in an i486DX, so a heat sink is often installed on the chip. Also called: 486DX, 80486. See also heat sink, i486DX, microprocessor. Compare OverDrive. i486SL n. A low-power-consumption version of Intel’s i486DX microprocessor designed primarily for laptop computers. The i486SL operates at a voltage of 3.3 volts rather than 5 volts, can shadow memory, and has a System Management Mode (SMM) in which the microprocessor can slow or halt some system components when the system is not performing CPU-intensive tasks, thus prolonging battery life. See also i486DX, shadow memory. i486SX n. An Intel microprocessor introduced in 1991 as a lower-cost alternative to the i486DX. It runs at slower clock speeds and has no floating-point processor. Also called: 486, 80486. See also 80386DX, 80386SX. Compare i486DX. IA-64 n. Short for Intel Architecture 64. Intel’s 64-bit microprocessor architecture based on EPIC (Explicitly Parallel Instruction Computing) technology. IA-64 is the foundation for the 64-bit Merced chip, as well as future chips to be based on the same architecture. Unlike architectures based on the sequential execution of instructions, IA-64 is designed to implement the parallel execution defined by EPIC technology. It also provides for numerous registers (128 general registers for integer and multimedia operations and 128 floating-point registers) and for grouping instructions in threes as 128-bit bundles. IA-64 architecture also features inherent scalability and compatibility with 32-bit software. See also EPIC, Merced. IAB n. See Internet Architecture Board. IAC n. Acronym for Information Analysis Center. One of several organizations chartered by the U.S. Department of Defense to facilitate the use of existing scientific and technical information. IACs establish and maintain comprehensive knowledge bases, including historical, technical, and scientific data, and also develop and maintain analytical tools and techniques for their use. IANA n. Acronym for Internet Assigned Numbers Authority. The organization historically responsible for assigning IP (Internet Protocol) addresses and overseeing technical parameters, such as protocol numbers and port numbers, related to the Internet protocol suite. Under the direction of the late Dr. 
Jon Postel, IANA operated as an arm of the Internet Architecture Board (IAB) of the Internet Society (ISOC) under contract with the U.S. government. However, given the international nature of the Internet, IANA’s functions, along with the domain name administration handled by U.S.-based Network Solutions, Inc. (NSI), were privatized in 1998 and turned over to a new, nonprofit organization known as ICANN (Internet Corporation for Assigned Names and Numbers). See also ICANN, NSI. I-beam n. A mouse cursor used by many applications, such as word processors, when in text-editing mode. The I-beam cursor indicates sections of the document where text can be inserted, deleted, changed, or moved. The cursor is named for its I shape. Also called: I-beam pointer. See also cursor (definition 3), mouse. I-beam pointer n. See I-beam. IBG n. Acronym for inter block gap. See inter-record gap. IBM AT n. A class of personal computers introduced in 1984 and conforming to IBM’s PC/AT (Advanced Technology) specification. The first AT was based on the Intel 80286 processor and dramatically outperformed its predecessor, the XT, in speed. See also 80286. IBM PC n. Short for IBM Personal Computer. A class of personal computers introduced in 1981 and conforming to IBM’s PC specification. The first PC was based on the Intel 8088 processor. For a number of years, the IBM PC was the de facto standard in the computing industry for PCs, and clones, or PCs that conformed to the IBM specification, have been called PC-compatible. See also PC-compatible, Wintel. IBM PC/XT n. A class of personal computers released by IBM in 1983. XT, Short for eXtended Technology. enabled users to add a wider range of peripherals to their machines than was possible with the original IBM PC. Equipped with a 10-megabyte hard disk drive and one or two 51/4-inch floppy drives, the PC/XT was expandable to 256K of RAM on the motherboard and was loaded with MS-DOS v2.1, which supported directories and subdirectories. The popularity of this machine contributed to the production of what came to be known in the industry as “clones,” copies of its design by many manufacturers. See also IBM AT, IBM PC, XT. IBM PC-compatible adj. See PC-compatible. iBook n. A notebook computer introduced by Apple in July 1999. The iBook was intended as a portable version of the iMac and is easily distinguished by its rounded shape and the bright colors of its case. Initial iBook models were powered by a 300-MHz G3 (PowerPC 750) processor and had the capability for wireless networking. See also iMac, PowerPC 750. IC1 adj. Acronym for In Character. Used to refer to events going on within a role-playing game, such as MUD, as opposed to events in real life. It is also used in the context of online chat, e-mail, and newsgroup postings. See also MUD, role-playing game. IC2 n. See integrated circuit. ICANN n. Acronym for Internet Corporation for Assigned Names and Numbers. The private, nonprofit corporation to which the U.S. government in 1998 delegated authority for administering IP (Internet Protocol) addresses, domain names, root servers, and Internet-related technical matters, such as management of protocol parameters (port numbers, protocol numbers, and so on). The successor to IANA (IP address administration) and NSI (domain name registration), ICANN was created to internationalize and privatize Internet management and administration. See also IANA, NSI. I-CASE n. Acronym for Integrated Computer-Aided Software Engineering. 
Software that performs a wide variety of software engineering functions, such as program design, coding, and testing parts or all of the completed program. ICE n. 1. Acronym for Information and Content Exchange. A protocol based on XML (Extensible Markup Language) designed to automate the distribution of syndicated content over the World Wide Web. Based on the concept of content syndicators (distributors) and subscribers (receivers), ICE defines the responsibilities of the parties involved, as well as the format and means of exchanging content so that data can easily be transferred and reused. The protocol has been submitted to the World Wide Web Consortium by Adobe Systems, Inc., CNET, Microsoft, Sun Microsystems, and Vignette Corporation. It is intended to help in both publishing and inter-business exchanges of content. 2. Acronym for in circuit emulator. A chip used as a stand-in for a microprocessor or a microcontroller. An in-circuit emulator is used to test and debug logic circuits. 3. Acronym for Intrusion Countermeasure Electronics. A fictional type of security software, popularized by science fiction novelist William Gibson, that responds to intruders by attempting to kill them. The origin of the term is attributed to a USENET subscriber, Tom Maddox. 4. See Intelligent Concept Extraction. ICM n. See image color matching. ICMP n. Acronym for Internet Control Message Protocol. A network-layer (ISO/OSI level 3) Internet protocol that provides error correction and other information relevant to IP packet processing. For example, it can let the IP software on one machine inform another machine about an unreachable destination. See also communications protocol, IP, ISO/OSI reference model, packet (definition 1). icon n. 1. A small image displayed on the screen to represent an object that can be manipulated by the user. By serving as visual mnemonics and allowing the user to control certain computer actions without having to remember commands or type them at the keyboard, icons contribute significantly to the user-friendliness of graphical user interfaces and to PCs in general. See also graphical user interface. 2. A high-level programming language designed to process non-numerical data structures and character strings using a Pascal-like syntax. iconic interface n. A user interface that is based on icons rather than on typed commands. See also graphical user interface, icon. icon parade n. The sequence of icons that appears during the boot-up of a Macintosh computer. ICP n. Acronym for Internet Cache Protocol. A networking protocol used by cache servers to locate specific Web objects in neighboring caches. Typically implemented over UDP, ICP also can be used for cache selection. ICP was developed for the Harvest research project at the University of Southern California. It has been implemented in SQUID and other Web proxy caches. ICQ n. A downloadable software program developed by Mirabilis, and now owned by AOL Time-Warner Inc., that notifies Internet users when friends, family, or other selected users are also on line and allows them to communicate with one another in real time. Through ICQ, users can chat, send e-mail, exchange messages on message boards, and transfer URLs and files, as well as launch third-party programs, such as games, in which multiple people can participate. Users compile a list of other users with whom they want to communicate. All users must register with the ICQ server and have ICQ software on their computer. 
The name is a reference to the phrase “I seek you.” See also instant messaging. ICSA n. Acronym for International Computer Security Association. An education and information organization concerned with Internet security issues. Known as the NCSA (National Computer Security Association) until 1997, the ICSA provides security assurance systems and product certification; disseminates computer security information in white papers, books, pamphlets, videos, and other publications; organizes consortiums devoted to various security issues; and maintains a Web site that provides updated information on viruses and other computer security topics. Founded in 1987, the ICSA is currently located in Reston, VA. ID n. Acronym for intrusion detection. See IDS. IDE n. 1. Acronym for Integrated Device Electronics. A type of disk-drive interface in which the controller electronics reside on the drive itself, eliminating the need for a separate adapter card. The IDE interface is compatible with the controller used by IBM in the PC/AT computer but offers advantages such as look-ahead caching. 2. See integrated development environment. identifier n. Any text string used as a label, such as the name of a procedure or a variable in a program or the name attached to a hard disk or floppy disk. Compare descriptor. IDL n. Acronym for Interface Definition Language. In object-oriented programming, a language that lets a program or object written in one language communicate with another program written in an unknown language. An IDL is used to define interfaces between client and server programs. For example, an IDL can provide interfaces to remote CORBA objects. See also CORBA, MIDL, object-oriented programming. idle adj. 1. Operational but not in use. 2. Waiting for a command. idle character n. In communications, a control character transmitted when no other information is available or ready to be sent. See also SYN. idle interrupt n. An interrupt that occurs when a device or process becomes idle. idle state n. The condition in which a device is operating but is not being used. IDS n. Acronym for intrusion-detection system. A type of security management system for computers and networks that gathers and analyzes information from various areas within a computer or a network to identify possible security breaches, both inside and outside the organization. An IDS can detect a wide range of hostile attack signatures, generate alarms, and, in some cases, cause routers to terminate communications from hostile sources. Also called: intrusion detection. Compare firewall. IDSL n. Acronym for Internet digital subscriber line. A high-speed digital communications service that provides Internet access as fast as 1.1 Mbps (megabits per second) over standard telephone lines. IDSL uses a hybrid of ISDN and digital subscriber line technology. See also digital subscriber line, ISDN. IE n. Acronym for information engineering. A methodology for developing and maintaining information-processing systems, including computer systems and networks, within an organization. IEEE n. Acronym for Institute of Electrical and Electronics Engineers. A society of engineering and electronics professionals based in the United States but boasting membership from numerous other countries. The IEEE (pronounced “eye triple ee”) focuses on electrical, electronics, computer engineering, and science-related matters. IEEE 1284 n. The IEEE standard for high-speed signaling through a bidirectional parallel computer interface. 
A computer that is compliant with the IEEE 1284 standard can communicate through its parallel port in five modes: outbound data transfer to a printer or similar device (“Centronics” mode), inbound transfer 4 (nibble mode) or 8 (byte mode) bits at a time, bidirectional Enhanced Parallel Ports (EPP) used by storage devices and other nonprinter peripherals, and Enhanced Capabilities Ports (ECP) used for bidirectional communication with a printer. See also Centronics parallel interface, ECP, enhanced parallel port. IEEE 1394 n. A nonproprietary, high-speed, serial bus input/output standard. IEEE 1394 provides a means of connecting digital devices, including personal computers and consumer electronics hardware. It is platform-independent, scalable (expandable), and flexible in supporting peer-to-peer (roughly, device-to-device) connections. IEEE 1394 preserves data integrity by eliminating the need to convert digital signals into analog signals. Created for desktop networks by Apple Computer and later developed by the IEEE 1394 working group, it is considered a low-cost interface for devices such as digital cameras, camcorders, and multimedia devices and is seen as a means of integrating personal computers and home electronics equipment. FireWire is the proprietary implementation of the standard by Apple Computer. See also analog data, IEEE. IEEE 1394 connector n. A type of connector that enables you to connect and disconnect high-speed serial devices. An IEEE 1394 connector is usually on the back of your computer near the serial port or the parallel port. The IEEE 1394 bus is used primarily to connect high-end digital video and audio devices to your computer; however, some hard disks, printers, scanners, and DVD drives can also be connected to your computer using the IEEE 1394 connector. IEEE 1394 port n. A 4- or 6-pin port that supports the IEEE 1394 standard and can provide direct connections between digital consumer electronics and computers. See also IEEE 1394. IEEE 488 n. The electrical definition of the General-Purpose Interface Bus (GPIB), specifying the data and control lines and the voltage and current levels for the bus. See also General-Purpose Interface Bus. IEEE 696/S-100 n. The electrical definition of the S-100 bus, used in early personal computer systems that used microprocessors such as the 8080, Z-80, and 6800. The S‐100 bus, based on the architecture of the Altair 8800, was extremely popular with early computer enthusiasts because it permitted installation of a wide range of expansion boards. See also Altair 8800, S-100 bus. A series of networking specifications developed by the IEEE. The x following 802 is a placeholder for individual specifications. The IEEE 802.x specifications correspond to the physical and data-link layers of the ISO/OSI reference model, but they divide the data-link layer into two sublayers. The logical link control (LLC) sublayer applies to all IEEE 802.x specifications and covers station-to-station connections, generation of message frames, and error control. The media access control (MAC) sublayer, dealing with network access and collision detection, differs from one IEEE 802 standard to another. IEEE 802.3 is used for bus networks that use CSMA/CD, both broadband and baseband, and the baseband version is based on the Ethernet standard. IEEE 802.4 is used for bus networks that use token passing, and IEEE 802.5 is used for ring networks that use token passing (token ring networks). 
IEEE 802.6 is an emerging standard for metropolitan area networks, which transmit data, voice, and video over distances of more than 5 kilometers. IEEE 802.14 is designed for bidirectional transmission to and from cable television networks over optical fiber and coaxial cable through transmission of fixed-length ATM cells to support television, data, voice, and Internet access. See the illustration. See also bus network, ISO/OSI reference model, ring network, token passing, token ring network. IEEE 802.x. ISO/OSI reference model with IEEE 802 LLC and MAC layers shown. IEEE 802.x. ISO/OSI reference model with IEEE 802 LLC and MAC layers shown. IEEE 802.11 n. The Institute of Electrical and Electronics Engineers’ (IEEE) specifications for wireless networking. These specifications, which include 802.11, 802.11a, 802.11b, and 802.11g, allow computers, printers, and other devices to communicate over a wireless local area network (LAN). IEEE printer cable n. A cable used to connect a printer to a PC’s parallel port that adheres to the IEEE 1284. See also IEEE 1284. IEPG n. Acronym for Internet Engineering and Planning Group. A collaborative group of Internet service providers whose goal is to promote the Internet and coordinate technical efforts on it. IESG n. See Internet Engineering Steering Group. IETF n. Acronym for Internet Engineering Task Force. A worldwide organization of individuals interested in networking and the Internet. Managed by the IESG (Internet Engineering Steering Group), the IETF is charged with studying technical problems facing the Internet and proposing solutions to the Internet Architecture Board (IAB). The work of the IETF is carried out by various Working Groups that concentrate on specific topics, such as routing and security. The IETF is the publisher of the specifications that led to the TCP/IP protocol standard. See also Internet Engineering Steering Group. IFC n. See Internet Foundation Classes. .iff n. The file extension that identifies files in the IFF (Interchange File Format) format. IFF was most commonly used on the Amiga platform, where it constituted almost any kind of data. On other platforms, IFF is mostly used to store image and sound files. IFF n. Acronym for Interchange File Format. See .iff. IFIP n. Acronym for International Federation of Information Processing. An organization of societies, representing over 40 member nations, that serves information-processing professionals. The United States is represented by the Federation on Computing in the United States (FOCUS). See also AFIPS, FOCUS. IFS n. See Installable File System Manager. IF statement n. A control statement that executes a block of code if a Boolean expression evaluates to true. Most programming languages also support an ELSE clause, which specifies code that is to be executed only if the Boolean expression evaluates to false. See also conditional. IGES n. See Initial Graphics Exchange Specification. IGMP n. See Internet Group Membership Protocol. IGP n. See Interior Gateway Protocol. IGRP n. Acronym for Interior Gateway Routing Protocol. A protocol developed by Cisco Systems that allows coordination between the routing of a number of gateways. Goals of IGRP include stable routing in large networks, fast response to changes in network topology, and low overhead. See also communications protocol, gateway, topology. IIA n. See SIIA. IIL n. See integrated injection logic. IIOP n. Acronym for Internet Inter-ORB Protocol. 
A networking protocol that enables distributed programs written in different programming languages to communicate over the Internet. IIOP, a specialized mapping of the General Inter-ORB Protocol (GIOP) based on a client/server model, is a critical part of CORBA. See also CORBA. Compare DCOM. IIS n. See Internet Information Server. ILEC n. Acronym for Incumbent Local Exchange Carrier. A telephone company that provides local service to its customers. Compare CLEC. illegal adj. Not allowed, or leading to invalid results. For example, an illegal character in a word processing program would be one that the program cannot recognize; an illegal operation might be impossible for a program or system because of built-in constraints. Compare invalid. illuminance n. 1. The amount of light falling on, or illuminating, a surface area. 2. A measure of illumination (such as watts per square meter) used in reference to devices such as televisions and computer displays. Compare luminance. IM n. See instant messaging. iMac n. A family of Apple Macintosh computers introduced in 1998. Designed for nontechnical users, the iMac has a case that contains both the CPU and the monitor and is available in several bright colors. The “i” in iMac stands for Internet; the iMac was designed to make setting up an Internet connection extremely simple. The first version of the iMac included a 266-MHz PowerPC processor, a 66-MHz system bus, a hard drive, a CD-ROM drive, and a 15-inch monitor, with a translucent blue case. Later iMacs came with faster processors and a choice of case colors. See the illustration. See also Macintosh. .image n. A file extension for a Macintosh Disk Image, a storage type often used on Apple’s FTP software download sites. image n. 1. A stored description of a graphic picture, either as a set of brightness and color values of pixels or as a set of instructions for reproducing the picture. See also bit map, pixel map. 2. A duplicate, copy, or representation of all or part of a hard or floppy disk, a section of memory or hard drive, a file, a program, or data. For example, a RAM disk can hold an image of all or part of a disk in main memory; a virtual RAM program can create an image of some portion of the computer’s main memory on disk. See also RAM disk. image-based rendering n. See immersive imaging. image color matching n. The process of correcting image output so that its colors match the colors that were originally scanned or input. image compression n. The use of a data compression technique on a graphical image. Uncompressed graphics files tend to use up large amounts of storage, so image compression is useful to conserve space. See also compressed file, data compression, video compression. image compression dialog component n. An application programming interface that sets parameters for compressing images and image sequences in QuickTime, a technology from Apple for creating, editing, publishing, and viewing multimedia content. The component displays a dialog box as a user interface, validates and stores the settings selected in the dialog box, and oversees the compression of the image or images based on the selected criteria. Image Compression Manager n. A major software component used in QuickTime, a technology from Apple for creating, editing, publishing, and viewing multimedia content. The Image Compression Manager is an interface that provides image-compression and image-decompression services to applications and other managers.
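A minimal sketch to accompany the image compression entry above, showing lossy and lossless compression of a picture file. It assumes the third-party Pillow imaging library for Python, and the file names are placeholders:

from PIL import Image   # third-party Pillow library (assumed installed)

picture = Image.open("photo.bmp").convert("RGB")   # uncompressed bitmap input
picture.save("photo.jpg", "JPEG", quality=60)      # lossy compression: smaller file, some detail discarded
picture.save("photo.png", "PNG", optimize=True)    # lossless compression: smaller than the BMP, no detail lost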
Because the Image Compression Manager is independent of specific compression algorithms and drivers, it can present a common application interface for software-based compressors and hardware-based compressors and offer compression options so that it or its application can use the appropriate tool for a particular situation. See also QuickTime. image compressor component n. A software component used by the Image Compression Manager to compress image data in QuickTime, a technology from Apple for creating, editing, publishing, and viewing multimedia content. See also Image Compression Manager, QuickTime. image decompressor component n. A software component used by the Image Compression Manager to decompress image data in QuickTime, a technology from Apple for creating, editing, publishing, and viewing multimedia content. See also Image Compression Manager, QuickTime. image editing n. The process of changing or modifying a bitmapped image, usually with an image editor. image editor n. An application program that allows users to modify the appearance of a bitmapped image, such as a scanned photo, by using filters and other functions. Creation of new images is generally accomplished in a paint or drawing program. See also bitmapped graphics, filter (definition 4), paint program. image enhancement n. The process of improving the quality of a graphic image, either automatically by software or manually by a user through a paint or drawing program. See also anti-aliasing, image processing. image map n. An image that contains more than one hyperlink on a Web page. Clicking different parts of the image links the user to other resources on another part of the Web page or a different Web page or in a file. Often an image map, which can be a photograph, drawing, or a composite of several different drawings or photographs, is used as a map to the resources found on a particular Web site. Older Web browsers support only server-side image maps, which are executed on a Web server through CGI script. However, most newer Web browsers (Netscape Navigator 2.0 and higher and Internet Explorer 3.0 and higher) support client-side image maps, which are executed in a user’s Web browser. Also called: clickable maps. See also CGI script, hyperlink, Web page. image processing n. The analysis, manipulation, storage, and display of graphical images from sources such as photographs, drawings, and video. Image processing spans a sequence of three steps. The input step (image capture and digitizing) converts the differences in coloring and shading in the picture into binary values that a computer can process. The processing step can include image enhancement and data compression. The output step consists of the display or printing of the processed image. Image processing is used in such applications as television and film, medicine, satellite weather mapping, machine vision, and computer-based pattern recognition. See also image enhancement, video digitizer. image sensor n. A light-sensitive integrated circuit or group of integrated circuits used in scanners, digital cameras, and video cameras. imagesetter n. A typesetting device that can transfer camera-ready text and artwork from computer files directly onto paper or film. Imagesetters print at high resolution (commonly above 1000 dpi) and are usually PostScript-compatible. image transcoder component n. 
A component that transfers compressed images from one file format to another in QuickTime, a technology developed by Apple for creating, editing, publishing, and viewing multimedia content. imaginary number n. A number that must be expressed as the product of a real number and i, where i^2 = –1. The sum of an imaginary number and a real number is a complex number. Although imaginary numbers are not directly encountered in the universe (as in “1.544 i megabits per second”), some pairs of quantities, especially in electrical engineering, behave mathematically like the real and imaginary parts of complex numbers. Compare complex number, real number. imaging n. The processes involved in the capture, storage, display, and printing of graphical images. IMAP4 n. Acronym for Internet Message Access Protocol 4. The latest version of IMAP, a method for an e-mail program to gain access to e-mail and bulletin board messages stored on a mail server. Unlike POP3, a similar protocol, IMAP allows a user to retrieve messages efficiently from more than one computer. Compare POP3. IMC n. See Internet Mail Consortium. IMHO n. Acronym for in my humble opinion. IMHO, used in e-mail and in online forums, flags a statement that the writer wants to present as a personal opinion rather than as a statement of fact. See also IMO. Imitation Game n. See Turing test. immediate access n. See direct access, random access. immediate operand n. A data value, used in the execution of an assembly language instruction, that is contained in the instruction itself rather than pointed to by an address in the instruction. immediate printing n. A process in which text and printing commands are sent directly to the printer without being stored as a printing file and without the use of an intermediate page-composition procedure or a file containing printer setup commands. immersive imaging n. A method of presenting photographic images on a computer by using virtual reality techniques. A common immersive image technique puts the user in the center of the view. The user can pan 360 degrees within the image and can zoom in and out. Another technique puts an object in the center of the view and allows the user to rotate around the object to examine it from any perspective. Immersive imaging techniques can be used to provide virtual reality experiences without equipment such as a headpiece and goggles. Also called: image-based rendering. See also imaging, virtual reality. IMO n. Acronym for in my opinion. A shorthand phrase used often in e-mail and Internet news and discussion groups to indicate an author’s admission that a statement he or she has just made is a matter of judgment rather than fact. See also IMHO. impact printer n. A printer, such as a wire-pin dot-matrix printer or a daisy-wheel printer, that drives an inked ribbon mechanically against the paper to form marks. See also daisy-wheel printer, dot-matrix printer. Compare nonimpact printer. impedance n. Opposition to the flow of alternating current. Impedance has two aspects: resistance, which impedes both direct and alternating current and is always greater than zero; and reactance, which impedes alternating current only, varies with frequency, and can be positive or negative. See also resistance. implementor n. In role-playing games, the administrator, coder, or developer of the game. Also called: Imp. See also role-playing game. import vb. To bring information from one system or program into another.
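A minimal sketch to accompany the IMAP4 entry above, using Python's standard imaplib module to read a mailbox that stays on the server; the host name and credentials are placeholders:

import imaplib

# Connect over SSL; messages remain on the server, so another computer can read them too.
with imaplib.IMAP4_SSL("mail.example.com") as conn:
    conn.login("user", "password")                # placeholder credentials
    conn.select("INBOX", readonly=True)           # open the mailbox without changing message flags
    status, data = conn.search(None, "UNSEEN")    # ask the server which messages are unread
    print("Unread message ids:", data[0].split())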
The system or program receiving the data must somehow support the internal format or structure of the data. Conventions such as the TIFF (Tagged Image File Format) and PICT formats (for graphics files) make importing easier. See also PICT, TIFF. Compare export. IMT-2000 n. See International Mobile Telecommunications for the Year 2000. inactive window n. In an environment capable of displaying multiple on-screen windows, any window other than the one currently being used for work. An inactive window can be partially or entirely hidden behind another window, and it remains inactive until the user selects it. Compare active window. in-band signaling n. Transmission within the voice or data-handling frequencies of a communication channel. in-betweening n. See tween. Inbox n. In many e-mail applications, the default mailbox where the program stores incoming messages. See also e-mail, mailbox. Compare Outbox. incident light n. The light that strikes a surface in computer graphics. See also illuminance. in-circuit emulator n. See ICE (definition 2). INCLUDE directive n. A statement within a source-code file that causes another source-code file to be read in at that spot, either during compilation or during execution. It enables a programmer to break up a program into smaller files and enables multiple programs to use the same files. inclusive OR n. See OR. increment1 n. A scalar or unit amount by which the value of an object such as a number, a pointer within an array, or a screen position designation is increased. Compare decrement1. increment2 vb. To increase a number by a given amount. For example, if a variable has the value 10 and is incremented successively by 2, it takes the values 12, 14, 16, 18, and so on. Compare decrement2. incumbent local exchange carrier n. See ILEC. indent1 n. 1. Displacement of the left or right edge of a block of text in relation to the margin or to other blocks of text. 2. Displacement of the beginning of the first line of a paragraph relative to the other lines in the paragraph. Compare hanging indent. indent2 vb. To displace the left or right edge of a text item, such as a block or a line, relative to the margin or to another text item. Indeo n. A codec technology developed by Intel for compressing digital video files. See also codec. Compare MPEG. independent content provider n. A business or organization that supplies information to an online information service, such as America Online, for resale to the information service’s customers. See also online information service. independent software vendor n. A third-party software developer; an individual or an organization that independently creates computer software. Acronym: ISV. index1 n. 1. A listing of keywords and associated data that point to the location of more comprehensive information, such as files and records on a disk or record keys in a database. 2. In programming, a scalar value that allows direct access into a multi-element data structure such as an array without the need for a sequential search through the collection of elements. See also array, element (definition 1), hash, list. index2 vb. 1. In data storage and retrieval, to create and use a list or table that contains reference information pointing to stored data. 2. In a database, to find data by using keys such as words or field names to locate records. 3. In indexed file storage, to find files stored on disk by using an index of file locations (addresses). 4. 
In programming and information processing, to locate information stored in a table by adding an offset amount, called the index, to the base address of the table. indexed address n. The location in memory of a particular item of data within a collection of items, such as an entry in a table. An indexed address is calculated by starting with a base address and adding to it a value stored in a register called an index register. indexed search n. A search for an item of data that uses an index to reduce the amount of time required. indexed sequential access method n. A scheme for decreasing the time necessary to locate a data record within a large database, given a key value that identifies the record. A smaller index file is used to store the keys along with pointers that locate the corresponding records in the large main database file. Given a key, first the index file is searched for the key and then the associated pointer is used to access the remaining data of the record in the main file. Acronym: ISAM. index hole n. The small, round hole near the large, round spindle opening at the center of a 5.25-inch floppy disk. The index hole marks the location of the first data sector, enabling a computer to synchronize its read/write operations with the disk’s rotation. Indexing Service Query Language n. A query language available in addition to SQL for the Indexing Service in Windows 2000. Formerly known as Index Server, the service originally indexed the content of Internet Information Services (IIS) Web servers. Indexing Service now creates indexed catalogs for the contents and properties of both file systems and virtual Webs. index mark n. 1. A magnetic indicator signal placed on a soft-sectored disk during formatting to mark the logical start of each track. 2. A visual information locator, such as a line, on a microfiche. indicator n. A dial or light that displays information about the status of a device, such as a light connected to a disk drive that glows when the disk is being accessed. indirect address n. See relative address. inductance n. The ability to store energy in the form of a magnetic field. Any length of wire has some inductance, and coiling the wire, especially around a ferromagnetic core, increases the inductance. The unit of inductance is the henry. Compare capacitance, induction. induction n. The creation of a voltage or current in a material by means of electric or magnetic fields, as in the secondary winding of a transformer when exposed to the changing magnetic field caused by an alternating current in the primary winding. See also impedance. Compare inductance. inductor n. A component designed to have a specific amount of inductance. An inductor passes direct current but impedes alternating current to a degree dependent on its frequency. An inductor usually consists of a length of wire coiled in a cylindrical or toroidal (doughnut-shaped) form, sometimes with a ferromagnetic core. See the illustration. Also called: coil. Inductor. One of several kinds of inductors. Industry Standard Architecture n. See ISA. INET n. 1. Short for Internet. 2. An annual conference held by the Internet Society. .inf n. The file extension for device information files, which contain scripts used to control hardware operations. infection n. The presence of a virus or Trojan horse in a computer system. See also Trojan horse, virus, worm. infer vb.
To formulate a conclusion based on specific information, either by applying the rules of formal logic or by generalizing from a set of observations. For example, from the facts that canaries are birds and birds have feathers, one can infer (draw the inference) that canaries have feathers. inference engine n. The processing portion of an expert system. It matches input propositions with facts and rules contained in a knowledge base and then derives a conclusion, on which the expert system then acts. inference programming n. A method of programming (as in Prolog) in which programs yield results based on logical inference from a set of facts and rules. See also Prolog. infinite loop n. 1. A loop that, because of semantic or logic errors, can never terminate through normal means. 2. A loop that is intentionally written with no explicit termination condition but will terminate as a result of side effects or direct intervention. See also loop1 (definition 1), side effect. infix notation n. A notation, used for writing expressions, in which binary operators appear between their arguments, as in 2 + 4. Unary operators usually appear before their arguments, as in –1. See also operator precedence, postfix notation, prefix notation, unary operator. .info n. One of seven new top-level domain names approved in 2001 by the Internet Corporation for Assigned Names and Numbers (ICANN). Unlike the other new domain names, which focus on specific types of Web sites, .info is meant for unrestricted use. infobahn n. The Internet. Infobahn is a mixture of the terms information and Autobahn, a German highway known for the high speeds at which drivers can legally travel. Also called: Information Highway, Information Superhighway, the Net. infomediary n. A term created from the phrase information intermediary. A service provider that positions itself between buyers and sellers, collecting, organizing, and distributing focused information that improves the interaction of consumer and online business. information n. The meaning of data as it is intended to be interpreted by people. Data consists of facts, which become information when they are seen in context and convey meaning to people. Computers process data without any understanding of what the data represents. Information Analysis Center n. See IAC. Information and Content Exchange n. See ICE (definition 1). information appliance n. A specialized computer designed to perform a limited number of functions and, especially, to provide access to the Internet. Although devices such as electronic address books or appointment calendars might be considered information appliances, the term is more typically used for devices that are less expensive and less capable than a fully functional personal computer. Set-top boxes are a current example; other devices, envisioned for the future, would include network-aware microwaves, refrigerators, watches, and the like. Also called: appliance. information center n. 1. A large computer center and its associated offices; the hub of an information management and dispersal facility in an organization. 2. A specialized type of computer system dedicated to information retrieval and decision-support functions. The information in such a system is usually read-only and consists of data extracted or downloaded from other production systems. information engineering n. See IE (definition 1). information explosion n. 1. 
The current period in human history, in which the possession and dissemination of information has supplanted mechanization or industrialization as a driving force in society. 2. The rapid growth in the amount of information available today. Also called: information revolution. information hiding n. A design practice in which implementation details for both data structures and algorithms within a module or subroutine are hidden from routines using that module or subroutine, so as to ensure that those routines do not depend on some particular detail of the implementation. In theory, information hiding allows the module or subroutine to be changed without breaking the routines that use it. See also break, module, routine, subroutine. Information Highway or information highway n. See Information Superhighway. Information Industry Association n. See SIIA. information kiosk n. See kiosk. information management n. The process of defining, evaluating, safeguarding, and distributing data within an organization or a system. information packet n. See packet (definition 1). information processing n. The acquisition, storage, manipulation, and presentation of data, particularly by electronic means. information resource management n. The process of managing the resources for the collection, storage, and manipulation of data within an organization or system. information retrieval n. The process of finding, organizing, and displaying information, particularly by electronic means. information revolution n. See information explosion. information science n. The study of how information is collected, organized, handled, and communicated. See also information theory. Information Services n. The formal name for a company’s data processing department. Acronym: IS. Also called: Data Processing, Information Processing, Information Systems, Information Technology, Management Information Services, Management Information Systems. Information Superhighway n. The existing Internet and its general infrastructure, including private networks, online services, and so on. See also National Information Infrastructure. Information Systems n. See Information Services. Information Technology n. See Information Services. Information Technology Industry Council n. Trade organization of the information technology industry. The council promotes the interests of the information technology industry and compiles information on computers, software, telecommunications, business equipment, and other topics related to information technology. Acronym: ITIC. information theory n. A mathematical discipline founded in 1948 that deals with the characteristics and the transmission of information. Information theory was originally applied to communications engineering but has proved relevant to other fields, including computing. It focuses on such aspects of communication as amount of data, transmission rate, channel capacity, and accuracy of transmission, whether over cables or within society. information warehouse n. The total of an organization’s data resources on all computers. information warfare n. Attacks on the computer operations on which an enemy country’s economic life or safety depends. Possible examples of information warfare include crashing air traffic control systems or massively corrupting stock exchange records. Infoseek n. A Web search site that provides full-text results for user searches plus categorized lists of related sites. 
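A minimal Python sketch to accompany the information hiding entry above; the class and its methods are invented for illustration:

class Counter:
    """Callers rely only on increment() and value(); the storage detail stays hidden."""

    def __init__(self):
        self._count = 0            # implementation detail: could later change without breaking callers

    def increment(self, step=1):
        self._count += step

    def value(self):
        return self._count

Because callers never touch _count directly, the stored representation can change without breaking the routines that use the class.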
Infoseek is powered by the Ultraseek search engine and searches Web pages, Usenet newsgroups, and FTP and Gopher sites. infrared adj. Having a frequency in the electromagnetic spectrum in the range just below that of red light. Objects radiate infrared in proportion to their temperature. Infrared radiation is traditionally divided into four somewhat arbitrary categories based on its wavelength, ranging from near infrared, at 750–1500 nanometers (nm), to the longest-wavelength band, at 40,000 nm–1 millimeter (mm). Acronym: IR. Infrared Data Association n. See IrDA. infrared device n. A computer, or a computer peripheral such as a printer, that can communicate by using infrared light. See also infrared. infrared file transfer n. Wireless file transfer between a computer and another computer or device using infrared light. See also infrared. infrared network connection n. A direct or incoming network connection to a remote access server using an infrared port. See also infrared port. infrared port n. An optical port on a computer for interfacing with an infrared-capable device. Communication is achieved without physical connection through cables. Infrared ports can be found on some laptops, notebooks, and printers. See also cable, infrared, port. inherent error n. An error in assumptions, design, logic, algorithms, or any combination thereof that causes a program to work improperly, regardless of how well written it is. For example, a serial communications program that is written to use a parallel port contains an inherent error. See also logic, semantics (definition 1), syntax. inherit vb. In object-oriented programming, to acquire the characteristics of another class. The inherited characteristics may be enhanced, restricted, or modified. See also class. inheritance n. 1. The transfer of the characteristics of a class in object-oriented programming to other classes derived from it. For example, if “vegetable” is a class, the classes “legume” and “root” can be derived from it, and each will inherit the properties of the “vegetable” class: name, growing season, and so on. See also class, object-oriented programming. 2. The transfer of certain properties, such as open files, from a parent program or process to another program or process that the parent causes to run. See also child (definition 1). inheritance code n. A set of structural and procedural attributes belonging to an object that has been passed on to it by the class or object from which it was derived. See also object-oriented programming. inhibit vb. To prevent an occurrence. For example, to inhibit interrupts from an external device means to prevent the external device from sending any interrupts. .ini n. In MS-DOS and Windows 3.x, the file extension that identifies an initialization file, which contains user preferences and startup information about an application program. ini file n. Short for initialization file, a text file containing information about the initial configuration of Windows and Windows-based applications, such as default settings for fonts, margins, and line spacing. Two ini files, win.ini and system.ini, are required to run the Windows operating system through version 3.1. In later versions of Windows, ini files are replaced by a database known as the registry. In addition to Windows itself, many older applications create their own ini files. Because they are composed only of text, ini files can be edited in any text editor or word processor to change information about the application or user preferences. All initialization files bear the extension .ini.
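A minimal Python sketch of the inheritance entry above, using its vegetable example; the attribute and method names are invented for illustration:

class Vegetable:
    def __init__(self, name, growing_season):
        self.name = name
        self.growing_season = growing_season

class Legume(Vegetable):           # Legume inherits name and growing_season from Vegetable
    def fixes_nitrogen(self):
        return True

pea = Legume("pea", "spring")      # pea is an instance of the derived class
print(pea.name, pea.growing_season, pea.fixes_nitrogen())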
See also configuration, configuration file, registry, system.ini, win.ini. INIT n. On older Macintosh computers, a system extension that is loaded into memory at startup time. See also extension (definition 4). Compare cdev. Initial Graphics Exchange Specification n. A standard file format for computer graphics, supported by the American National Standards Institute (ANSI), that is particularly suitable for describing models created with computer-aided design (CAD) programs. It includes a wide variety of basic geometric forms (primitives) and, in keeping with CAD objectives, offers methods for describing and annotating drawings and engineering diagrams. Acronym: IGES. See also ANSI. initialization n. The process of assigning initial values to variables and data structures in a program. initialization file n. See ini file. initialization string n. A sequence of commands sent to a device, especially a modem, to configure it and prepare it for use. In the case of a modem, the initialization string consists of a string of characters. initialize vb. 1. To prepare a storage medium, such as a disk or a tape, for use. This may involve testing the medium’s surface, writing startup information, and setting up the file system’s index to storage locations. 2. To assign a beginning value to a variable. 3. To start up a computer. See also cold boot, startup. initializer n. An expression whose value is the first (initial) value of a variable. See also expression. initial program load n. The process of copying an operating system into memory when a system is booted. Acronym: IPL. See also boot, startup. initiator n. The device in a SCSI connection that issues commands. The device that receives the commands is the target. See also SCSI, target. ink cartridge n. A disposable module that contains ink and is typically used in an ink-jet printer. See also ink-jet printer. ink-jet printer or inkjet printer n. A nonimpact printer in which liquid ink is vibrated or heated into a mist and sprayed through tiny holes in the print head to form characters or graphics on the paper. Ink-jet printers are competitive with some laser printers in price and print quality if not in speed. However, the ink, which must be highly soluble to avoid clogging the nozzles in the print head, produces fuzzy-looking output on some papers and smears if touched or dampened shortly after printing. See also nonimpact printer, print head. inline adj. 1. In programming, referring to a function call replaced with an instance of the function’s body. Actual arguments are substituted for formal parameters. An inline function is usually done as a compile-time transformation to increase the efficiency of the program. Also called: unfold, unroll. 2. In HTML code, referring to graphics displayed along with HTML-formatted text. Inline images placed in the line of HTML text use the tag <IMG>. Text within an inline image can be aligned to the top, bottom, or middle of a specific image. inline code n. Assembly language or machine language instructions embedded within high-level source code. The form it takes varies considerably from compiler to compiler, if it is supported at all. inline discussion n. Discussion comments that are associated with a document as a whole or with a particular paragraph, image, or table of a document. In Web browsers, inline discussions are displayed in the body of the document; in word-processing programs, they are usually displayed in a separate discussion or comments pane. inline graphics n. 
Graphics files that are embedded in an HTML document or Web page and viewable by a Web browser or other program that recognizes HTML. By avoiding the need for separate file opening operations, inline graphics can speed the access and loading of an HTML document. Also called: inline image. inline image n. An image that is embedded within the text of a document. Inline images are common on Web pages. See also inline graphics. inline processing n. Operation on a segment of low-level program code, called inline code, to optimize execution speed or storage requirements. See also inline code. inline stylesheet n. A stylesheet included within an HTML document. Because an inline stylesheet is directly associated with an individual document, any changes made to that document’s appearance will not affect the appearance of other Web site documents. Compare linked stylesheet. inline subroutine n. A subroutine whose code is copied at each place in a program at which it is called, rather than kept in one place to which execution is transferred. Inline subroutines improve execution speed, but they also increase code size. Inline subroutines obey the same syntactical and semantic rules as ordinary subroutines. Inmarsat n. Acronym for International Maritime Satellite. Organization based in London, England, that operates satellites for international mobile telecommunications services in more than 80 nations. Inmarsat provides services for maritime, aviation, and land use. inner join n. An operator in relational algebra, often implemented in database management. The inner join produces a relation (table) that contains all possible ordered concatenations (joinings) of records from two existing tables that meet certain specified criteria on the data values. It is thus equivalent to a product followed by a select applied to the resulting table. Compare outer join. inoculate vb. To protect a program against virus infection by recording characteristic information about it. For example, checksums on the code can be recomputed and compared with the stored original checksums each time the program is run; if any have changed, the program file is corrupt and may be infected. See also checksum, virus. input1 n. Information entered into a computer or program for processing, as from a keyboard or from a file stored on a disk drive. input2 vb. To enter information into a computer for processing. input area n. See input buffer. input-bound adj. See input/output-bound. input buffer n. A portion of computer memory set aside for temporary storage of information arriving for processing. See also buffer1. input channel n. See input/output channel. input device n. A peripheral device whose purpose is to allow the user to provide input to a computer system. Examples of input devices are keyboards, mice, joysticks, and styluses. See also peripheral. input driver n. See device driver. input language n. 1. A language to be inputted into the system through the keyboard, a speech-to-text converter, or an Input Method Editor (IME). 2. In Microsoft Windows XP, a Regional and Language Options setting that specifies the combination of the language being entered and the keyboard layout, IME, speech-to-text converter, or other device being used to enter it. This setting was formerly known as input locale. Input Method Editor n. Programs used to enter the thousands of different characters in written Asian languages with a standard 101-key keyboard. 
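A minimal Python sketch to accompany the inner join entry above, concatenating records from two small tables whose key values meet the join criterion; the tables and field names are invented:

customers = [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Grace"}]
orders = [{"customer_id": 1, "item": "disk"}, {"customer_id": 1, "item": "modem"}]

# Keep every ordered concatenation of records whose key values satisfy the criterion.
joined = [{**c, **o} for c in customers for o in orders if c["id"] == o["customer_id"]]
print(joined)    # only Ada's records appear; Grace has no matching order and is dropped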
An IME consists of both an engine that converts keystrokes into phonetic and ideograph characters and a dictionary of commonly used ideographic words. As the user enters keystrokes, the IME engine attempts to identify which character or characters the keystrokes should be converted into. Acronym: IME. input/output n. The complementary tasks of gathering data for a computer or a program to work with, and of making the results of the computer’s activities available to the user or to other computer processes. Gathering data is usually done with input devices such as the keyboard and the mouse, while the output is usually made available to the user via the display and the printer. Other data resources, such as disk files and communications ports for the computer, can serve as either input or output devices. Acronym: I/O. input/output area n. See input/output buffer. input/output-bound adj. Characterized by the need to spend lengthy amounts of time waiting for input and output of data that is processed much more rapidly. For example, if the processor is capable of making rapid changes to a large database stored on a disk faster than the drive mechanism can perform the read and write operations, the computer is input/output-bound. A computer may be just input-bound or just output-bound if only input or only output limits the speed at which the processor accepts and processes data. Also called: I/O-bound. input/output buffer n. A portion of computer memory reserved for temporary storage of incoming and outgoing data. Because input/output devices can often write to a buffer without intervention from the CPU, a program can continue execution while the buffer fills, thus speeding program execution. See also buffer1. input/output bus n. A hardware path used inside a computer for transferring information to and from the processor and various input and output devices. See also bus. input/output channel n. A hardware path from the CPU to the input/output bus. See also bus. input/output controller n. Circuitry that monitors operations and performs tasks related to receiving input and transferring output at an input or output device or port, thus providing the processor with a consistent means of communication (input/output interface) with the device and also freeing the processor’s time for other work. For example, when a read or write operation is performed on a disk, the drive’s controller carries out the high-speed, electronically sophisticated tasks involved in positioning the read-write heads, locating specific storage areas on the spinning disk, reading from and writing to the disk surface, and even checking for errors. Most controllers require software that enables the computer to receive and process the data the controller makes available. Also called: device controller, I/O controller. input/output device n. A piece of hardware that can be used both for providing data to a computer and for receiving data from it, depending on the current situation. A disk drive is an example of an input/output device. Some devices, such as a keyboard or a mouse, can be used only for input and are thus called input (input-only) devices. Other devices, such as printers, can be used only for output and are thus called output (output-only) devices. Most devices require installation of software routines called device drivers to enable the computer to transmit and receive data to and from them. input/output interface n. See input/output controller. input/output port n. See port. input/output processor n. 
Hardware designed to handle input and output operations to relieve the burden on the main processing unit. For example, a digital signal processor can perform time-intensive, complicated analysis and synthesis of sound patterns without CPU overhead. See also digital signal processor, front-end processor (definition 1). input/output statement n. A program instruction that causes data to be transferred between memory and an input or output device. input port n. See port. input stream n. A flow of information used in a program as a sequence of bytes that are associated with a particular task or destination. Input streams include series of characters read from the keyboard to memory and blocks of data read from disk files. Compare output stream. inquiry n. A request for information. See also query. INS n. See WINS. insertion point n. A blinking vertical bar on the screen, such as in graphical user interfaces, that marks the location at which inserted text will appear. See also cursor (definition 1). insertion sort n. A list-sorting algorithm that starts with a list that contains one item and builds an ever-larger sorted list by inserting the items to be sorted one at a time into their correct positions on that list. Insertion sorts are inefficient when used with arrays, because of constant shuffling of items, but are ideally suited for sorting linked lists. See also linked list, sort algorithm. Compare bubble sort, quicksort. Insert key n. A key on the keyboard, labeled “Insert” or “Ins,” whose usual function is to toggle a program’s editing setting between an insert mode and an overwrite mode, although it may perform different functions in different applications. Also called: Ins key. insert mode n. A mode of operation in which a character typed into a document or at a command line pushes subsequent existing characters farther to the right on the screen rather than overwriting them. Insert mode is the opposite of overwrite mode, in which new characters replace subsequent existing characters. The key or key combination used to change from one mode to the other varies among programs, but the Insert key is most often used. Compare overwrite mode. insider attack n. An attack on a network or system carried out by an individual associated with the hacked system. Insider attacks are typically the work of current or former employees of a company or organization who have knowledge of passwords and network vulnerabilities. Compare intruder attack. Ins key n. See Insert key. install vb. To set in place and prepare for operation. Operating systems and application programs commonly include a disk-based installation, or setup, program that does most of the work of preparing the program to work with the computer, printer, and other devices. Often such a program can check for devices attached to the system, request the user to choose from sets of options, create a place for the program on the hard disk, and modify system startup files as necessary. installable device driver n. A device driver that can be embedded within an operating system, usually in order to override an existing, less-functional service. Installable File System Manager n. In Windows 9x and Windows 2000, the part of the file system architecture responsible for arbitrating access to the different file system components. Acronym: IFS. installation program n. A program whose function is to install another program, either on a storage medium or in memory. 
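A minimal Python sketch to accompany the insertion sort entry above; it uses a plain list for brevity, although the entry notes that the algorithm is best suited to linked lists:

def insertion_sort(items):
    sorted_items = []                  # build an ever-larger sorted list, one item at a time
    for item in items:
        position = 0
        while position < len(sorted_items) and sorted_items[position] <= item:
            position += 1              # find the first element larger than the new item
        sorted_items.insert(position, item)
    return sorted_items

print(insertion_sort([5, 2, 4, 1, 3]))   # [1, 2, 3, 4, 5]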
An installation program, also called a setup program, might be used to guide a user through the often complex process of setting up an application for a particular combination of machine, printer, and monitor. Installer n. A program, provided with the Apple Macintosh operating system, that allows the user to install system upgrades and make bootable (system) disks. instance n. An object, in object-oriented programming, in relation to the class to which it belongs. For example, an object myList that belongs to a class List is an instance of the class List. See also class, instance variable, instantiate, object (definition 2). instance variable n. A variable associated with an instance of a class (an object). If a class defines a certain variable, each instance of the class has its own copy of that variable. See also class, instance, object (definition 2), object-oriented programming. instantiate vb. To create an instance of a class. See also class, instance, object (definition 2). instant messaging n. A service that alerts users when friends or colleagues are on line and allows them to communicate with each other in real time through private online chat areas. With instant messaging, a user creates a list of other users with whom he or she wishes to communicate; when a user from his or her list is on line, the service alerts the user and enables immediate contact with the other user. While instant messaging has primarily been a proprietary service offered by Internet service providers such as AOL and MSN, businesses are starting to employ instant messaging to increase employee efficiency and make expertise more readily available to employees. Institute of Electrical and Electronics Engineers n. See IEEE. instruction n. An action statement in any computer language, most often in machine or assembly language. Most programs consist of two types of statements: declarations and instructions. See also declaration, statement. instruction code n. See operation code. instruction counter n. See instruction register. instruction cycle n. The cycle in which a processor retrieves an instruction from memory, decodes it, and carries it out. The time required for an instruction cycle is the sum of the instruction (fetch) time and the execution (translate and execute) time and is measured by the number of clock ticks (pulses of a processor’s internal timer) consumed. instruction mix n. The assortment of types of instructions contained in a program, such as assignment instructions, mathematical instructions (floating-point or integer), control instructions, and indexing instructions. Knowledge of instruction mixes is important to designers of CPUs because it tells them which instructions should be shortened to yield the greatest speed, and to designers of benchmarks because it enables them to make the benchmarks relevant to real tasks. instruction pointer n. See program counter. instruction register n. A register in a central processing unit that holds the address of the next instruction to be executed. instruction set n. The set of machine instructions that a processor recognizes and can execute. See also assembler, microcode. instruction time n. The number of clock ticks (pulses of a computer’s internal timer) required to retrieve an instruction from memory. Instruction time is the first part of an instruction cycle; the second part is the execution (translate and execute) time. Also called: I-time. instruction word n. 1. The length of a machine language instruction. 2. 
A machine language instruction containing an operation code identifying the type of instruction, possibly one or more operands specifying data to be affected or its address, and possibly bits used for indexing or other purposes. See also assembler, machine code. insulator n. 1. Any material that is a very poor conductor of electricity, such as rubber, glass, or ceramic. Also called: nonconductor. Compare conductor, semiconductor. 2. A device used to separate elements of electrical circuits and prevent current from taking unwanted paths, such as the stacks of ceramic disks that suspend high-voltage power lines from transmission towers. integer n. 1. A positive or negative “whole” number, such as 37, –50, or 764. 2. A data type representing whole numbers. Calculations involving only integers are much faster than calculations involving floating-point numbers, so integers are widely used in programming for counting and numbering purposes. Integers can be signed (positive or negative) or unsigned (positive). They can also be described as long or short, depending on the number of bytes needed to store them. Short integers, stored in 2 bytes, cover a smaller range of numbers (for example, –32,768 through 32,767) than do long integers (for example, –2,147,483,648 through 2,147,483,647), which are stored in 4 bytes. Also called: integral number. See also floating-point notation. integral modem n. A modem that is built into a computer, as opposed to an internal modem, which is a modem on an expansion card that can be removed. See also external modem, internal modem, modem. integral number n. See integer (definition 2). integrated circuit n. A device consisting of a number of connected circuit elements, such as transistors and resistors, fabricated on a single chip of silicon crystal or other semiconductor material. Integrated circuits are categorized by the number of elements they contain: small-scale integration (SSI), in the 10s; medium-scale integration (MSI), in the 100s; large-scale integration (LSI), in the 1000s; very-large-scale integration (VLSI), in the 100,000s; and ultra-large-scale integration (ULSI), 1,000,000 or more. Acronym: IC. Also called: chip. See also central processing unit. integrated development environment n. A set of integrated tools for developing software. The tools are generally run from one user interface and consist of a compiler, an editor, and a debugger, among others. Acronym: IDE. Integrated Device Electronics n. See IDE (definition 1). integrated injection logic n. A type of circuit design that uses both NPN and PNP transistors and does not require other components, such as resistors. Such circuits are moderately fast, consume little power, and can be manufactured in very small sizes. Acronym: I2L, IIL. Also called: merged transistor logic. See also NPN transistor, PNP transistor. Integrated Services Digital Network n. See ISDN. Integrated Services LAN n. See isochronous network. integrated software n. A program that combines several applications, such as word processing, database management, and spreadsheets, in a single package. Such software is “integrated” in two ways: it can transfer data from one of its applications to another, helping users coordinate tasks and merge information created with the different software tools; and it provides the user with a consistent interface for choosing commands, managing files, and otherwise interacting with the programs so that the user will not have to master several, often very different, programs.
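A small Python sketch to accompany the integer entry above, confirming that the ranges quoted there follow from the number of bits used for storage:

def signed_range(bits):
    # In two's-complement storage, half of the 2**bits patterns represent negative values.
    return -(2 ** (bits - 1)), 2 ** (bits - 1) - 1

print(signed_range(16))   # (-32768, 32767): a short integer stored in 2 bytes
print(signed_range(32))   # (-2147483648, 2147483647): a long integer stored in 4 bytes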
The applications in an integrated software package are often not, however, designed to offer as much capability as single applications, nor does integrated software necessarily include all the applications needed in a particular environment. integration n. 1. In computing, the combining of different activities, programs, or hardware components into a functional unit. See also integral modem, integrated software, ISDN. 2. In electronics, the process of packing multiple electronic circuit elements on a single chip. See also integrated circuit. 3. In mathematics, specifically calculus, a procedure performed on an equation and related to finding the area under a given curve or the volume within a given shape. integrator n. A circuit whose output represents the integral, with respect to time, of the input signal—that is, its total accumulated value over time. See the illustration. Integrator. An example of the action of an integrator circuit. integrity n. The completeness and accuracy of data stored in a computer, especially after it has been manipulated in some way. See also data integrity. Intel Architecture 64 n. See IA-64. intellectual property n. Content of the human intellect deemed to be unique and original and to have marketplace value—and thus to warrant protection under the law. Intellectual property includes but is not limited to ideas; inventions; literary works; chemical, business, or computer processes; and company or product names and logos. Intellectual property protections fall into four categories: copyright (for literary works, art, and music), trademarks (for company and product names and logos), patents (for inventions and processes), and trade secrets (for recipes, code, and processes). Concern over defining and protecting intellectual property in cyberspace has brought this area of the law under intense scrutiny. intelligence n. 1. The ability of hardware to process information. A device without intelligence is said to be dumb; for example, a dumb terminal connected to a computer can receive input and display output but cannot process information independently. 2. The ability of a program to monitor its environment and initiate appropriate actions to achieve a desired state. For example, a program waiting for data to be read from disk might switch to another task in the meantime. 3. The ability of a program to simulate human thought. See also artificial intelligence. 4. The ability of a machine such as a robot to respond appropriately to changing stimuli (input). intelligent adj. Of, pertaining to, or characteristic of a device partially or totally controlled by one or more processors integral to the device. intelligent agent n. See agent (definition 2). intelligent cable n. A cable that incorporates circuitry to do more than simply pass signals from one end of the cable to the other, such as to determine the characteristics of the connector into which it is plugged. Also called: smart cable. Intelligent Concept Extraction n. A technology owned by Excite, Inc., for searching indexed databases to retrieve documents from the World Wide Web. Intelligent Concept Extraction is like other search technologies in being able to locate indexed Web documents related to one or more key words entered by the user. Based on proprietary search technology, however, it also matches documents conceptually by finding relevant information even if the document found does not contain the key word or words specified by the user.
Thus, the list of documents found by Intelligent Concept Extraction can include both documents containing the specified search term and those containing alternative words related to the search term. Acronym: ICE. intelligent database n. A database that manipulates stored information in a way that people find logical, natural, and easy to use. An intelligent database conducts searches relying not only on traditional data-finding routines but also on predetermined rules governing associations, relationships, and even inferences regarding the data. See also database. intelligent hub n. A type of hub that, in addition to transmitting signals, has built-in capability for other network chores, such as monitoring or reporting on network status. Intelligent hubs are used in different types of networks, including ARCnet and 10Base-T Ethernet. See also hub. Intelligent Input/Output n. See I2O. intelligent terminal n. A terminal with its own memory, processor, and firmware that can perform certain functions independently of its host computer, most often the rerouting of incoming data to a printer or video screen. Intelligent Transportation Infrastructure n. A system of automated urban and suburban highway and mass transit control and management services proposed in 1996 by U.S. Secretary of Transportation Federico Peña. Acronym: ITI. IntelliSense n. A Microsoft technology used in various Microsoft products, including Internet Explorer, Visual Basic, Visual C++, and Office, that is designed to help users perform routine tasks. In Visual Basic, for example, information such as the properties and methods of an object is displayed as the developer types the name of the object in the Visual Basic code window. Intensity Red Green Blue n. See IRGB. interactive adj. Characterized by conversational exchange of input and output, as when a user enters a question or command and the system immediately responds. The interactivity of microcomputers is one of the features that makes them approachable and easy to use. interactive fiction n. A type of computer game in which the user participates in a story by giving commands to the system. The commands given by the user determine, to some extent, the events that occur during the story. Typically the story involves a goal that must be achieved, and the puzzle is to determine the correct sequence of actions that will lead to the accomplishment of that goal. See also adventure game. interactive graphics n. A form of user interface in which the user can change and control graphic displays, often with the help of a pointing device such as a mouse or a joystick. Interactive graphics interfaces occur in a range of computer products, from games to computer-aided design (CAD) systems. interactive processing n. Processing that involves the more or less continuous participation of the user. Such a command/response mode is characteristic of microcomputers. Compare batch processing (definition 2). interactive program n. A program that exchanges output and input with the user, who typically views a display of some sort and uses an input device, such as a keyboard, mouse, or joystick, to provide responses to the program. A computer game is an interactive program. Compare batch program. interactive services n. See BISDN. interactive session n. A processing session in which the user can more or less continuously intervene and control the activities of the computer. Compare batch processing (definition 2). interactive television n.
A video technology in which a viewer interacts with the television programming. Typical uses of interactive television include Internet access, video on demand, and video conferencing. See also video conferencing. interactive TV n. See iTV. interactive video n. The use of computer-controlled video, in the form of a CD-ROM or videodisc, for interactive education or entertainment. See also CD-ROM, interactive, interactive television, videodisc. interactive voice response n. A computer that operates through the telephone system, in which input commands and data are transmitted to the computer as spoken words and numbers or tones and dial pulses generated by a telephone instrument; and output instructions and data are received from the computer as prerecorded or synthesized speech. For example, a dial-in service that provides airline flight schedules when you press certain key codes on your telephone is an interactive voice response system. Also called: IVR. Interactive voice system n. See interactive voice response. interapplication communication n. The process of one program sending messages to another program. For example, some e-mail programs allow users to click on a URL within the message. After the user clicks on the URL, browser software will automatically launch and access the URL. interblock gap n. See inter-record gap. Interchange File Format n. See .iff. Interchange Format n. See Rich Text Format. interconnect n. 1. See System Area Network. 2. An electrical or mechanical connection. Interconnect is the physical connection and communication between two components in a computer system. interface n. 1. The point at which a connection is made between two elements so that they can work with each other or exchange information. 2. Software that enables a program to work with the user (the user interface, which can be a command-line interface, menu-driven interface, or a graphical user interface), with another program such as the operating system, or with the computer’s hardware. See also application programming interface, graphical user interface. 3. A card, plug, or other device that connects pieces of hardware with the computer so that information can be moved from place to place. For example, standardized interfaces such as RS-232-C standard and SCSI enable communications between computers and printers or disks. See also RS-232-C standard, SCSI. interface adapter n. See network adapter. interface card n. See adapter. Interface Definition Language n. See IDL. interference n. 1. Noise or other external signals that affect the performance of a communications channel. 2. Electromagnetic signals that can disturb radio or television reception. The signals can be generated naturally, as in lightning, or by electronic devices, such as computers. Interior Gateway Protocol n. A protocol used for distributing routing information among routers (gateways) in an autonomous network—that is, a network under the control of one administrative body. The two most often used interior gateway protocols are RIP (Routing Information Protocol) and OSPF (Open Shortest Path First). Acronym: IGP. See also autonomous system, OSPF, RIP. Compare exterior gateway protocol. Interior Gateway Routing Protocol n. See IGRP. Interix n. A software application from Microsoft that allows businesses to run existing UNIX-based legacy applications while adding applications based on the Microsoft Windows operating system. 
Interix serves as a single enterprise platform from which to run UNIX-based, Internet-based, and Windows-based applications. interlaced adj. Pertaining to a display method on raster-scan monitors in which the electron beam refreshes or updates all odd-numbered scan lines in one vertical sweep of the screen and all even-numbered scan lines in the next sweep. Compare noninterlaced. interlaced GIF n. A picture in GIF format that is gradually displayed in a Web browser, showing increasingly detailed versions of the picture until the entire file has finished downloading. Users of slower modems have a perceived shorter wait time for the image to appear, and they can sometimes get enough information about the image to decide whether to proceed with the download or move on. Users with faster connections will notice little difference in effect between an interlaced GIF and a noninterlaced GIF. interlace scanning n. A display technique designed to reduce flicker and distortions in television transmissions; also used with some raster-scan monitors. In interlace scanning the electron beam in the television or monitor refreshes alternate sets of scan lines in successive top-to-bottom sweeps, refreshing all even lines on one pass, and all odd lines on the other. Because of the screen phosphor’s ability to maintain an image for a short time before fading and the tendency of the human eye to average or blend subtle differences in light intensity, the human viewer sees a complete display, but the amount of information carried by the display signal and the number of lines that must be displayed per sweep are halved. Interlaced images are not as clear as those produced by the progressive scanning typical of newer computer monitors. Interlace scanning is, however, the standard method of displaying analog broadcast television images. Also called: interlacing. Compare progressive scanning. interlacing n. See interlace scanning. interleave vb. To arrange the sectors on a hard disk in such a way that after one sector is read, the next sector in numeric sequence will arrive at the head when the computer is ready to accept it rather than before, which would make the computer wait a whole revolution of the platter for the sector to come back. Interleaving is set by the format utility that initializes a disk for use with a given computer. interleaved memory n. A method of organizing the addresses in RAM memory in order to reduce wait states. In interleaved memory, adjacent locations are stored in different rows of chips so that after accessing a byte, the processor does not have to wait an entire memory cycle before accessing the next byte. See also access time (definition 1), wait state. interlock vb. To prevent a device from acting while the current operation is in progress. intermediate language n. 1. A computer language used as an intermediate step between the original source language, usually a high-level language, and the target language, usually machine code. Some high-level compilers use assembly language as an intermediate language. See also compiler (definition 2), object code. 2. See Microsoft intermediate language. intermittent adj. Pertaining to something, such as a signal or connection, that is not unbroken but occurs at periodic or occasional intervals. intermittent error n. An error that recurs at unpredictable times. internal clock n. See clock/calendar. internal command n. A routine that is loaded into memory along with the operating system and resides there for as long as the computer is on. 
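For illustration, here is a minimal Python sketch of this internal-versus-external distinction (the command names and dispatch table are invented; no real operating system shell works exactly this way): a resident table handles built-in commands in-process, and anything else is launched as a separate program.

```python
import shlex
import subprocess

def builtin_echo(args):
    # Internal command: handled by the resident shell code itself.
    print(" ".join(args))

# Table of internal (built-in) commands kept in memory for the life of the shell.
INTERNAL_COMMANDS = {"echo": builtin_echo}

def run(command_line):
    """Run an internal command if one matches; otherwise launch an external program."""
    parts = shlex.split(command_line)
    if not parts:
        return
    name, args = parts[0], parts[1:]
    if name in INTERNAL_COMMANDS:
        INTERNAL_COMMANDS[name](args)   # resolved in-process, no program load
    else:
        subprocess.run(parts)           # external command: a program loaded from disk

if __name__ == "__main__":
    run("echo internal commands stay resident")
```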
Compare external command. internal font n. A font that is already loaded in a printer’s memory (ROM) when the printer is shipped. Compare downloadable font, font cartridge. internal interrupt n. An interrupt generated by the processor itself in response to certain predefined situations, such as an attempt to divide by zero or an arithmetic value exceeding the number of bits allowed for it. See also interrupt. Compare external interrupt. internal memory n. See primary storage. internal modem n. A modem constructed on an expansion card to be installed in one of the expansion slots inside a computer. Compare external modem, integral modem. internal schema n. A view of information about the physical files composing a database, including file names, file locations, accessing methodology, and actual or potential data derivations, in a database model such as that described by ANSI/X3/SPARC, that supports a three-schema architecture. The internal schema corresponds to the schema in systems based on CODASYL/DBTG. In a distributed database, there may be a different internal schema at each location. See also conceptual schema, schema. internal sort n. 1. A sorting operation that takes place on files completely or largely held in memory rather than on disk during the process. 2. A sorting procedure that produces sorted subgroups of records that will be subsequently merged into one list. International Computer Security Association n. See ICSA. International Federation of Information Processing n. See IFIP. International Maritime Satellite n. See Inmarsat. International Mobile Telecommunications for the Year 2000 n. Specifications set forth by the International Telecommunications Union (ITU) to establish third-generation wireless telecommunication network architecture. The specifications include faster data transmission speeds and improved voice quality. Acronym: IMT-2000. International Organization for Standardization n. See ISO. International Telecommunication Union n. See ITU. International Telecommunication Union-Telecommunication Standardization Sector n. See ITU-T. International Telegraph and Telephone Consultative Committee n. English-language form of the name for the Comité Consultatif International Télégraphique et Téléphonique, a standards organization that became part of the International Telecommunication Union in 1992. See ITU-T. See also CCITT. Internaut n. See cybernaut. internet n. Short for internetwork. A set of computer networks that may be dissimilar and are joined together by means of gateways that handle data transfer and conversion of messages from the sending networks’ protocols to those of the receiving network. Internet n. The worldwide collection of networks and gateways that use the TCP/IP suite of protocols to communicate with one another. At the heart of the Internet is a backbone of high-speed data communication lines between major nodes or host computers, consisting of thousands of commercial, government, educational, and other computer systems, that route data and messages. One or more Internet nodes can go off line without endangering the Internet as a whole or causing communications on the Internet to stop, because no single computer or network controls it. The genesis of the Internet was a decentralized network called ARPANET created by the U.S. Department of Defense in 1969 to facilitate communications in the event of a nuclear attack. Eventually other networks, including BITNET, Usenet, UUCP, and NSFnet, were connected to ARPANET. 
Currently the Internet offers a range of services to users, such as FTP, e-mail, the World Wide Web, Usenet news, Gopher, IRC, telnet, and others. Also called: the Net. See also BITNET, FTP1 (definition 1), Gopher, IRC, NSFnet, telnet, Usenet, UUCP, World Wide Web. Internet2 n. A computer-network development project launched in 1996 by a collaborative group of 120 universities under the auspices of the University Corporation for Advanced Internet Development (UCAID). The consortium is now being led by over 190 universities working with industry and government. The goal of Internet2, whose high-speed, fiberoptic backbone was brought on line in early 1999, is the development of advanced Internet technologies and applications for use in research and education at the university level. Though not open for public use, Internet2 and the technologies and applications developed by its members are intended to eventually benefit users of the commercial Internet as well. Some of the new technologies Internet2 and its members are developing and testing include IPv6, multicasting, and quality of service (QoS). Internet2 and the Next Generation Internet (NGI) are complementary initiatives. Compare Internet, Next Generation Internet. Internet access n. 1. The capability of a user to connect to the Internet. This is generally accomplished through one of two ways. The first is through a dialing up of an Internet service provider or an online information services provider via a modem connected to the user’s computer. This method is the one used by the majority of home computer users. The second way is through a dedicated line, such as a T1 carrier, that is connected to a local area network, to which, in turn, the user’s computer is connected. The dedicated line solution is used by larger organizations, such as corporations, which either have their own node on the Internet or connect to an Internet service provider that is a node. A third way that is emerging is for users to use set-top boxes with their TVs. Generally, however, this will give a user access only to documents on the World Wide Web. See also dedicated line (definition 1), ISP, LAN, modem, node (definition 2), set-top box. 2. The capability of an online information service to exchange data with the Internet, such as e-mail, or to offer Internet services to users, such as newsgroups, FTP, and the World Wide Web. Most online information services offer Internet access to their users. See also FTP1 (definition 1), online information service. Internet access device n. A communications and signal-routing mechanism, possibly incorporating usage tracking and billing features, for use in connecting multiple remote users to the Internet. Internet access provider n. See ISP. Internet account n. A generic term for a registered username at an Internet Service Provider (ISP). An Internet account is accessed via username and password. Services such as dial-in PPP Internet access and e-mail are provided by ISPs to Internet account owners. Internet address n. See domain name address, e-mail address, IP address. Internet appliance n. 1. See set-top box. 2. See server appliance. Internet Architecture Board n. The body of the Internet Society (ISOC) responsible for overall architectural considerations regarding the Internet. The IAB also serves to adjudicate disputes in the standards process. Acronym: IAB. See also Internet Society. Internet Assigned Numbers Authority n. See IANA, ICANN. Internet backbone n. 
One of several high-speed networks connecting many local and regional networks, with at least one connection point where it exchanges packets with other Internet backbones. Historically, the NSFnet (predecessor to the modern Internet) was the backbone to the entire Internet in the United States. This backbone linked the supercomputing centers that the National Science Foundation (NSF) runs. Today, different providers have their own backbones so that the backbone for the supercomputing centers is independent of backbones for commercial Internet providers such as MCI and Sprint. See also backbone. Internet broadcasting n. Broadcasting of audio, or audio plus video, signals across the Internet. Internet broadcasting includes conventional over-the-air broadcast stations that transmit their signals into the Internet as well as Internet-only stations. Listeners use audio Internet software, such as RealAudio. One method of Internet broadcasting is MBONE. See also MBONE, RealAudio. Internet Cache Protocol n. See ICP. Internet Control Message Protocol n. See ICMP. Internet Corporation for Assigned Names and Numbers n. See ICANN. Internet cramming n. See Web cramming. Internet Directory n. 1. Online database of sites organized by category where you can search for files and information by subject, keyword, or other criteria. 2. Storage place for information such as names, Web addresses, organizations, departments, countries, and locations. Typically, Internet Directories are used to look up e-mail addresses that are not in a local address book or a corporate-wide directory. Internet Draft n. A document produced by the Internet Engineering Task Force (IETF) for purposes of discussing a possible change in standards that govern the Internet. An Internet Draft is subject to revision or replacement at any time; if not replaced or revised, the Internet Draft is valid for no more than six months. An Internet Draft, if accepted, may be developed into an RFC. See also IETF, RFC. Internet Engineering and Planning Group n. See IEPG. Internet Engineering Steering Group n. The group within the Internet Society (ISOC) that, along with the Internet Architecture Board (IAB), reviews the standards proposed by the Internet Engineering Task Force (IETF). Acronym: IESG. Internet Engineering Task Force n. See IETF. Internet Explorer n. Microsoft’s Web browsing software. Introduced in October 1995, the latest versions of Internet Explorer include many features that allow you to customize your experience on the Web. Internet Explorer is also available for the Macintosh and UNIX platforms. See also ActiveX control, Java applet, Web browser. Internet Foundation Classes n. A Java class library developed by Netscape to facilitate the creation of full-feature, mission-critical Java applications. Internet Foundation Classes (IFC) comprises user-interface objects and frameworks intended to extend Java’s Abstract Window Toolkit (AWT) and includes a multifont text editor; essential application controls; and drag-and-drop, drawing/event, windowing, animation, object persistence, single-thread, and localization frameworks. See also Abstract Window Toolkit, Application Foundation Classes, Java Foundation Classes, Microsoft Foundation Classes. Internet gateway n. A device that provides the connection between the Internet backbone and another network, such as a LAN (local area network). Usually the device is a computer dedicated to the task or a router. 
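In rough outline, the forwarding choice such a gateway or router makes can be sketched as a table lookup. The Python sketch below is only an illustration: the prefixes and next-hop addresses are invented, and real devices use much larger tables and hardware-assisted longest-prefix matching.

```python
import ipaddress

# Illustrative routing table: network prefix -> next-hop address.
# All prefixes and next hops here are invented example values.
ROUTES = {
    ipaddress.ip_network("192.168.1.0/24"): "192.168.1.1",
    ipaddress.ip_network("10.0.0.0/8"): "10.0.0.1",
}
DEFAULT_GATEWAY = "203.0.113.1"  # where everything else is sent

def next_hop(destination: str) -> str:
    """Return the next hop for a destination, preferring the longest matching prefix."""
    addr = ipaddress.ip_address(destination)
    matches = [net for net in ROUTES if addr in net]
    if not matches:
        return DEFAULT_GATEWAY
    best = max(matches, key=lambda net: net.prefixlen)  # longest-prefix match
    return ROUTES[best]

print(next_hop("10.1.2.3"))      # 10.0.0.1
print(next_hop("198.51.100.7"))  # 203.0.113.1 (default gateway)
```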
The gateway generally performs protocol conversion between the Internet backbone and the network, data translation or conversion, and message handling. A gateway is considered a node on the Internet. See also gateway, Internet backbone, node (definition 2), router. Internet Group Membership Protocol n. A protocol used by IP hosts to report their host group memberships to any immediately neighboring multicast routers. Acronym: IGMP. Internet home n. See smart home. Internet Information Server n. Microsoft’s brand of Web server software, utilizing HTTP (Hypertext Transfer Protocol) to deliver World Wide Web documents. It incorporates various functions for security, allows for CGI programs, and also provides for Gopher and FTP services. Internet Inter-ORB Protocol n. See IIOP. Internet Mail Consortium n. An international membership organization of businesses and vendors involved in activities related to e-mail transmission over the Internet. The goals of the Internet Mail Consortium are related to the promotion and expansion of Internet mail. The group’s interests range from making Internet mail easier for new users to advancing new mail technologies and expanding the role played by Internet mail into areas such as electronic commerce and entertainment. For example, the Internet Mail Consortium supports two companion specifications, vCalendar and vCard, designed to facilitate electronic exchange of scheduling and personal information. Acronym: IMC. Internet Naming Service n. See WINS. Internet Printing Protocol n. A specification for transmission of documents to printers through the Internet. Development of the Internet Printing Protocol (IPP) was proposed in 1997 by members of the Internet Engineering Task Force (IETF). Intended to provide a standard protocol for Internet-based printing, IPP covers both printing and printer management (printer status, job cancellation, and so on). It is applicable to print servers and to network-capable printers. Internet Protocol n. See IP. Internet Protocol address n. See IP address. Internet Protocol next generation n. See IPng. Internet Protocol number n. See IP address. Internet Protocol Security n. See IPSec. Internet Protocol version 4 n. See IPv4. Internet Protocol version 6 n. See IPv6. Internet reference model n. See TCP/IP reference model. Internet Relay Chat n. See IRC. Internet Research Steering Group n. The governing body of the Internet Research Task Force (IRTF). Acronym: IRSG. Internet Research Task Force n. A volunteer organization that is an arm of the Internet Society (ISOC) focused on making long-term recommendations concerning the Internet to the Internet Architecture Board (IAB). Acronym: IRTF. See also Internet Society. Internet robot n. See spider. Internet security n. A broad topic dealing with all aspects of data authentication, privacy, integrity, and verification for transactions over the Internet. For example, credit card purchases made via a World Wide Web browser require attention to Internet security issues to ensure that the credit card number is not intercepted by an intruder or copied from the server where the number is stored, and to verify that the credit card number is actually sent by the person who claims to be sending it. Internet Security and Acceleration Server n. A software application from Microsoft Corporation to increase the security and performance of Internet access for businesses. 
Internet Security and Acceleration Server provides an enterprise firewall and high-performance Web cache server to securely manage the flow of information from the Internet through the enterprise’s internal network. Acronym: ISA Server. Internet Server Application Programming Interface n. See ISAPI. Internet service provider n. See ISP. Internet Society n. An international, nonprofit organization based in Reston, Virginia, comprising individuals, companies, foundations, and government agencies, that promotes the use, maintenance, and development of the Internet. The Internet Architecture Board (IAB) is a body within the Internet Society. In addition, the Internet Society publishes the Internet Society News and produces the annual INET conference. Acronym: ISOC. See also INET, Internet Architecture Board. Internet Software Consortium n. A nonprofit organization that develops software that is available for free, via the World Wide Web or FTP, and engages in development of Internet standards such as the Dynamic Host Configuration Protocol (DHCP). Acronym: ISC. See also DHCP. Internet SSE n. See SSE. Internet Streaming Media Alliance n. See ISMA. Internet synchronization n. 1. The process of synchronizing data between computing and communication devices that are connected to the Internet. 2. A feature in Microsoft Jet and Microsoft Access that allows replicated information to be synchronized in an environment in which an Internet server is configured with Microsoft Replication Manager, a tool included with Microsoft Office 2000 Developer. Internet Talk Radio n. Audio programs similar to radio broadcasts but distributed over the Internet in the form of files that can be downloaded via FTP. Internet Talk Radio programs, prepared at the National Press Building in Washington, D.C., are 30 minutes to 1 hour in length; a 30-minute program requires about 15 MB of disk space. Acronym: ITR. Internet telephone n. Point-to-point voice communication that uses the Internet instead of the public-switched telecommunications network to connect the calling and called parties. Both the sending and the receiving party need a computer, a modem, an Internet connection, and an Internet telephone software package to make and receive calls. Internet Telephony Service Provider n. See ITSP. Internet telephony n. See VoIP. Internet television n. The transmission of television audio and video signals over the Internet. Internet traffic distribution n. See ITM. Internet traffic management n. See ITM. internetwork1 adj. Of or pertaining to communications between connected networks. It is often used to refer to communication between one LAN (local area network) and another over the Internet or another WAN (wide-area network). See also LAN, WAN. internetwork2 n. A network made up of smaller, interconnected networks. Internetwork Packet Exchange n. See IPX. Internetwork Packet Exchange/Sequenced Packet Exchange n. See IPX/SPX. Internet World n. Series of international conferences and exhibitions on e-commerce and Internet technology sponsored by Internet World magazine. Major conferences include the world’s largest Internet conferences, Internet World Spring and Internet World Fall. Internet Worm n. A string of self-replicating computer code that was distributed through the Internet in November 1988. In a single night, it overloaded and shut down a large portion of the computers connected to the Internet at that time by replicating itself over and over on each computer it accessed, exploiting a bug in UNIX systems. 
Intended as a prank, the Internet Worm was written by a student at Cornell University. See also back door, worm. InterNIC n. Short for NSFnet (Internet) Network Information Center. The organization that has traditionally registered domain names and IP addresses as well as distributed information about the Internet. InterNIC was formed in 1993 as a consortium involving the U.S. National Science Foundation, AT&T, General Atomics, and Network Solutions, Inc. (Herndon, Va.). The latter partner administers InterNIC Registration Services, which assigns Internet names and addresses. interoperability n. Referring to components of computer systems that are able to function in different environments. For example, Microsoft’s NT operating system is interoperable on Intel, DEC Alpha, and other CPUs. Another example is the SCSI standard for disk drives and other peripheral devices that allows them to interoperate with different operating systems. With software, interoperability occurs when programs are able to share data and resources. Microsoft Word, for example, is able to read files created by Microsoft Excel. interpolate vb. To estimate intermediate values between two known values in a sequence. interpret vb. 1. To translate a statement or instruction into executable form and then execute it. 2. To execute a program by translating one statement at a time into executable form and executing it before translating the next statement, rather than by translating the program completely into executable code (compiling it) before executing it separately. See also interpreter. Compare compile. interpreted language n. A language in which programs are translated into executable form and executed one statement at a time rather than being translated completely (compiled) before execution. Basic, LISP, and APL are generally interpreted languages, although Basic can also be compiled. See also compiler. Compare compiled language. interpreter n. A program that translates and then executes each statement in a program written in an interpreted language. See also compiler, interpreted language, language processor. interprocess communication n. The ability of one task or process to communicate with another in a multitasking operating system. Common methods include pipes, semaphores, shared memory, queues, signals, and mailboxes. Acronym: IPC. inter-record gap n. An unused space between data blocks stored on a disk or tape. Because the speed of disks and tapes fluctuates slightly during operation of the drives, a new data block may not occupy the exact space occupied by the old block it overwrites. The inter-record gap prevents the new block from overwriting part of adjacent blocks in such a case. Acronym: IRG. Also called: gap, interblock gap. interrogate vb. To query with the expectation of an immediate response. For example, a computer may interrogate an attached terminal to determine the terminal’s status (readiness to transmit or receive). interrupt n. A signal from a device to a computer’s processor requesting attention from the processor. When the processor receives an interrupt, it suspends its current operations, saves the status of its work, and transfers control to a special routine known as an interrupt handler, which contains the instructions for dealing with the particular situation that caused the interrupt. Interrupts can be generated by various hardware devices to request service or report problems, or by the processor itself in response to program errors or requests for operating-system services. 
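The dispatch idea behind this mechanism can be sketched in software terms. The Python fragment below is only an analogy (real interrupt handling happens in hardware and in the operating system, and the interrupt numbers and handlers are invented): a table maps interrupt numbers to handler routines, the current work is set aside, the matching handler runs, and normal work then resumes.

```python
# Illustrative interrupt-vector-style dispatch table: interrupt number -> handler.
def keyboard_handler():
    print("handling keyboard interrupt")

def timer_handler():
    print("handling timer interrupt")

INTERRUPT_TABLE = {1: keyboard_handler, 2: timer_handler}

def raise_interrupt(number, saved_state):
    """Suspend current work, run the matching handler, then resume."""
    print(f"saving state: {saved_state}")
    handler = INTERRUPT_TABLE.get(number)
    if handler is None:
        print(f"unexpected interrupt {number}: ignoring")
    else:
        handler()
    print(f"restoring state: {saved_state}")

raise_interrupt(2, saved_state={"pc": 0x400, "work": "summing a column"})
```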
Interrupts are the processor’s way of communicating with the other elements that make up a computer system. A hierarchy of interrupt priorities determines which interrupt request will be handled first if more than one request is made. A program can temporarily disable some interrupts if it needs the full attention of the processor to complete a particular task. See also exception, external interrupt, hardware interrupt, internal interrupt, software interrupt. interrupt-driven processing n. Processing that takes place only when requested by means of an interrupt. After the required task has been completed, the CPU is free to perform other tasks until the next interrupt occurs. Interrupt-driven processing is usually employed for responding to events such as a key pressed by the user or a floppy disk drive that has become ready to transfer data. See also interrupt. Compare autopolling. interrupt handler n. A special routine that is executed when a specific interrupt occurs. Interrupts from different causes have different handlers to carry out the corresponding tasks, such as updating the system clock or reading the keyboard. A table stored in low memory contains pointers, sometimes called vectors, that direct the processor to the various interrupt handlers. Programmers can create interrupt handlers to replace or supplement existing handlers, such as by making a clicking sound each time the keyboard is pressed. interrupt priority n. See interrupt. interrupt request line n. A hardware line over which a device such as an input/output port, the keyboard, or a disk drive can send interrupts (requests for service) to the CPU. Interrupt request lines are built into the computer’s internal hardware and are assigned different levels of priority so that the CPU can determine the sources and relative importance of incoming service requests. They are of concern mainly to programmers dealing with low-level operations close to the hardware. Acronym: IRQ. interrupt vector n. A memory location that contains the address of the interrupt handler routine that is to be called when a specific interrupt occurs. See also interrupt. interrupt vector table n. See dispatch table. intersect n. An operator in relational algebra, used in database management. Given two relations (tables), A and B, that have corresponding fields (columns) containing the same types of values (that is, they are union-compatible), then INTERSECT A, B builds a third relation containing only those tuples (rows) that appear in both A and B. See also tuple. interstitial n. An Internet ad format that appears in a pop-up window between Web pages. Interstitial ads download completely before appearing, usually while a Web page the user has chosen is loading. Because interstitial pop-up windows don’t appear until the entire ad has downloaded, they often use animated graphics, audio, and other attention-getting multimedia technology that require longer download time. in the wild adj. Currently affecting the computing public, particularly in regard to computer viruses. A virus that is not yet contained or controlled by antivirus software or that keeps reappearing despite virus detection measures is considered to be in the wild. See also virus. intranet n. A private network based on Internet protocols such as TCP/IP but designed for information management within a company or organization. Its uses include such services as document distribution, software distribution, access to databases, and training. 
An intranet is so called because it looks like a World Wide Web site and is based on the same technologies, yet is strictly internal to the organization and is not connected to the Internet proper. Some intranets also offer access to the Internet, but such connections are directed through a firewall that protects the internal network from the external Web. Compare extranet. intraware n. Groupware or middleware for use on a company’s private intranet. Intraware packages typically contain e-mail, database, workflow, and browser applications. See also groupware, intranet, middleware. intrinsic font n. A font (type size and design) for which a bit image (an exact pattern) exists that can be used as is, without such modification as scaling. Compare derived font. intruder n. An unauthorized user or unauthorized program, generally considered to have malicious intent, on a computer or computer network. See also bacterium, cracker, Trojan horse, virus. intruder attack n. A form of hacker attack in which the hacker enters the system without prior knowledge or access to the system. The intruder will typically use a combination of probing tools and techniques to learn about the network to be hacked. Compare insider attack. Intrusion Countermeasure Electronics n. See ICE (definition 3). intrusion detection n. See IDS. intrusion-detection system n. See IDS. invalid adj. Erroneous or unrecognizable because of a flaw in reasoning or an error in input. Invalid results, for example, might occur if the logic in a program is faulty. Compare illegal. inverse video n. See reverse video. invert vb. 1. To reverse something or change it to its opposite. For example, to invert the colors on a monochrome display means to change light to dark and dark to light. See the illustration. 2. In a digital electrical signal, to replace a high level by a low level and vice versa. This type of operation is the electronic equivalent of a Boolean NOT operation. Invert. An example showing the effects of inverting the colors on a monochrome display. inverted file n. See inverted list. inverted list n. A method for creating alternative locators for sets of information. For example, in a file containing data about cars, records 3, 7, 19, 24, and 32 might contain the value “Red” in the field COLOR. An inverted list (or index) on the field COLOR would contain a record for “Red” followed by the locator numbers 3, 7, 19, 24, and 32. See also field, record. Compare linked list. inverted-list database n. A database similar to a relational database but with several differences that make it much more difficult for the database management system to ensure data consistency, integrity, and security than with a relational system. The rows (records or tuples) of an inverted-list table are ordered in a specific physical sequence, independent of any orderings that may be imposed by means of indexes. The total database can also be ordered, with specified logical merge criteria being imposed between tables. Any number of search keys, either simple or composite, can be defined. Unlike the keys of a relational system, these search keys are arbitrary fields or combinations of fields. No integrity or uniqueness constraints are enforced; neither the indexes nor the tables are transparent to the user. Compare relational database. inverted structure n. A file structure in which record keys are stored and manipulated separately from the records themselves. inverter n. 1.
A logic circuit that inverts (reverses) the signal input to it—for example, inverting a high input to a low output. 2. A device that converts direct current (DC) to alternating current (AC). invoke vb. To call or activate; used in reference to commands and subroutines. I/O n. See input/output. I/O-bound adj. See input/output-bound. I/O controller n. See input/output controller. I/O device n. See input/output device. ion-deposition printer n. A page printer in which the image is formed in electrostatic charges on a drum that picks up toner and transfers it to the paper, as in a laser, LED, or LCD printer, but the drum is charged using a beam of ions rather than light. These printers, used mainly in high-volume data-processing environments, typically operate at speeds from 30 to 90 pages per minute. In ion-deposition printers, toner is typically fused to paper by a method that is fast and does not require heat but leaves the paper a little glossy, making it unsuitable for business correspondence. In addition, ion-deposition printers tend to produce thick, slightly fuzzy characters; the technology is also more expensive than that of a laser printer. See also electrophotographic printers, nonimpact printer, page printer. Compare laser printer, LCD printer, LED printer. I/O port n. See port1 (definition 1). I/O processor n. See input/output processor. IO.SYS n. One of two hidden system files installed on an MS-DOS startup disk. IO.SYS in IBM releases of MS-DOS (called IBMBIO.COM) contains device drivers for peripherals such as the display, keyboard, floppy disk drive, hard disk drive, serial port, and real-time clock. See also MSDOS.SYS. IP n. Acronym for Internet Protocol. The protocol within TCP/IP that governs the breakup of data messages into packets, the routing of the packets from sender to destination network and station, and the reassembly of the packets into the original data messages at the destination. IP runs at the internetwork layer in the TCP/IP model—equivalent to the network layer in the ISO/OSI reference model. See also ISO/OSI reference model, TCP/IP. Compare TCP. IP address n. Short for Internet Protocol address. A 32-bit (4-byte) binary number that uniquely identifies a host (computer) connected to the Internet to other Internet hosts, for the purposes of communication through the transfer of packets. An IP address is expressed in “dotted quad” format, consisting of the decimal values of its 4 bytes, separated with periods; for example, 127.0.0.1. The first 1, 2, or 3 bytes of the IP address identify the network the host is connected to; the remaining bits identify the host itself. The 32 bits of all 4 bytes together can signify almost 232, or roughly 4 billion, hosts. (A few small ranges within that set of numbers are not used.) Also called: Internet Protocol number, IP number. See also host, IANA, ICANN, InterNIC, IP, IP address classes, packet (definition 2). Compare domain name. IP address classes Short for Internet Protocol address classes. The classes into which IP addresses were divided to accommodate different network sizes. Each class is associated with a range of possible IP addresses and is limited to a specific number of networks per class and hosts per network. See the table. See also Class A IP address, Class B IP address, Class C IP address, IP address. IP address classes. Each x represents the host-number field assigned by the network administrator. 
Class A (/8): range 1.x.x.x to 126.x.x.x; 126 networks per class; 16,777,214 hosts per network (maximum)
Class B (/16): range 128.0.x.x to 191.255.x.x; 16,384 networks per class; 65,534 hosts per network (maximum)
Class C (/24): range 192.0.0.x to 223.255.255.x; 2,097,152 networks per class; 254 hosts per network (maximum)
IP aliasing n. See NAT. IPC n. See interprocess communication. ipchains n. See iptables. IP Filter n. Short for Internet Protocol Filter. A TCP/IP packet filter for UNIX, particularly BSD. Similar in functionality to netfilter and iptables in Linux, IP Filter can be used to provide network address translation (NAT) or firewall services. See also firewall. Compare netfilter, iptables. IPL n. See initial program load. IP masquerading n. See NAT. IP multicasting n. Short for Internet Protocol multicasting. The extension of local area network multicasting technology to a TCP/IP network. Hosts send and receive multicast datagrams, the destination fields of which specify IP host group addresses rather than individual IP addresses. A host indicates that it is a member of a group by means of the Internet Group Management Protocol. See also datagram, Internet Group Membership Protocol, IP, MBONE, multicasting. IPng n. Acronym for Internet Protocol next generation. A revised version of the Internet Protocol (IP) designed primarily to address growth on the Internet. IPng is compatible with, but an evolutionary successor to, the current version of IP, IPv4 (IP version 4), and was approved as a draft standard in 1998 by the IETF (Internet Engineering Task Force). It offers several improvements over IPv4 including a quadrupled IP address size (from 32 bits to 128 bits), expanded routing capabilities, simplified header formats, improved support for options, and support for quality of service, authentication, and privacy. Also called: IPv6. See also IETF, IP, IP address. IP number n. See IP address. IPP n. See Internet Printing Protocol. IPSec n. Short for Internet Protocol Security. A security mechanism under development by the IETF (Internet Engineering Task Force) designed to ensure secure packet exchanges at the IP (Internet Protocol) layer. IPSec is based on two levels of security: AH (Authentication Header), which authenticates the sender and assures the recipient that the information has not been altered during transmission, and ESP (Encapsulating Security Protocol), which provides data encryption in addition to authentication and integrity assurance. IPSec protects all protocols in the TCP/IP protocol suite and Internet communications by using Layer Two Tunneling Protocol (L2TP) and is expected to ensure secure transmissions over virtual private networks (VPNs). See also anti-replay, communications protocol, Diffie-Hellman, ESP, IETF, IP, IPv6, Layer L2TP, TCP/IP, packet, virtual private network. IP Security n. See IPSec. IP/SoC Conference and Exhibition n. Acronym for Intellectual Property/System on a Chip Conference and Exhibition. Leading conference and exhibition for executives, architects, and engineers using intellectual property in the design and production of system-on-a-chip semiconductors. The event features product exhibits and forums for the exchange of information. IP splicing n. See IP spoofing. IP spoofing n. The act of inserting a false sender IP address into an Internet transmission in order to gain unauthorized access to a computer system. Also called: IP splicing. See also IP address, spoofing. IP switching n. A technology developed by Ipsilon Networks (Sunnyvale, Calif.)
that enables a sequence of IP packets with a common destination to be transmitted over a high-speed, high-bandwidth Asynchronous Transfer Mode (ATM) connection. iptables n. A utility used to configure firewall settings and rules in Linux. Part of the netfilter framework in the Linux kernel, iptables replaces ipchains, a previous implementation. See also netfilter. Compare IP Filter. IP telephony n. Telephone service including voice and fax, provided through an Internet or network connection. IP telephony requires two steps: conversion of analog voice to digital format by a coding/uncoding device (codec) and conversion of the digitized information to packets for IP transmission. Also called: Internet telephony, Voice over IP (VoIP). See also H.323, VoIP. IP tunneling n. A technique used to encapsulate data inside a TCP/IP packet for transmission between IP addresses. IP tunneling provides a secure means for data from different networks to be shared over the Internet. IPv4 n. Short for Internet Protocol version 4. The current version of the Internet Protocol (IP), as compared with the next-generation IP, which is known familiarly as IPng and more formally as IPv6 (IP version 6). See also IP. Compare IPng. IPv6 n. Short for Internet Protocol version 6. The next-generation Internet Protocol from the Internet Engineering Task Force (IETF), IPv6 is now included as part of IP support in many products and in the major operating systems. IPv6 offers several improvements from IPv4, most significantly an increase of available address space from 32 to 128 bits, which makes the number of available addresses effectively unlimited. Usually called IPng (next generation), IPv6 also includes support for multicast and anycast addressing. See also anycasting, IP, IPng. ipvs n. Acronym for IP Virtual Server. See LVS. IPX n. Acronym for Internetwork Packet Exchange. The protocol in Novell NetWare that governs addressing and routing of packets within and between LANs. IPX packets can be encapsulated in Ethernet packets or Token Ring frames. IPX operates at ISO/OSI levels 3 and 4 but does not perform all the functions at those levels. In particular, IPX does not guarantee that a message will be complete (no lost packets); SPX has that job. See also Ethernet (definition 1), packet, Token Ring network. Compare SPX (definition 1). IPX/SPX n. Acronym for Internetwork Packet Exchange/Sequenced Packet Exchange. The network and transport level protocols used by Novell NetWare, which together correspond to the combination of TCP and IP in the TCP/IP protocol suite. IPX is a connectionless protocol that handles addressing and routing of packets. SPX, which runs above IPX, ensures correct delivery. See also IPX, SPX (definition 1). IR n. See infrared. IRC n. Acronym for Internet Relay Chat. A service that enables an Internet user to participate in a conversation on line in real time with other users. An IRC channel, maintained by an IRC server, transmits the text typed by each user who has joined the channel to all other users who have joined the channel. Generally, a channel is dedicated to a particular topic, which may be reflected in the channel’s name. An IRC client shows the names of currently active channels, enables the user to join a channel, and then displays the other participants’ words on individual lines so that the user can respond. IRC was invented in 1988 by Jarkko Oikarinen of Finland. See also channel (definition 2), server (definition 2). IrDA n. Acronym for Infrared Data Association. 
The industry organization of computer, component, and telecommunications vendors who have established the standards for infrared communication between computers and peripheral devices such as printers. IRE scale n. Short for Institute of Radio Engineers scale. Scale to determine video signal amplitudes as devised by the Institute of Radio Engineers, which is now part of the Institute of Electrical and Electronic Engineers (IEEE). The IRE scale includes a total of 140 units, with 100 up and 40 down from zero. IRG n. See inter-record gap. IRGB n. Acronym for Intensity Red Green Blue. A type of color encoding originally used in IBM’s Color/Graphics Adapter (CGA) and continued in the EGA (Enhanced Graphics Adapter) and VGA (Video Graphics Array). The standard 3-bit RGB color encoding (specifying eight colors) is supplemented by a fourth bit (called Intensity) that uniformly increases the intensity of the red, green, and blue signals, resulting in a total of 16 colors. See also RGB. IRL n. Acronym for in real life. An expression used by many online users to denote life outside the computer realm, especially in conjunction with virtual worlds such as online talkers, IRC, MUDs, and virtual reality. See also IRC, MUD, talker, virtual reality. IRQ n. Acronym for interrupt request. One of a set of possible hardware interrupts, identified by a number, on a Wintel computer. The number of the IRQ determines which interrupt handler will be used. In the AT bus, ISA, and EISA, 15 IRQs are available; in Micro Channel Architecture, 255 IRQs are available. Each device’s IRQ is hardwired or set by a jumper or DIP switch. The VL bus and the PCI local bus have their own interrupt systems, which they translate to IRQ numbers. See also AT bus, DIP switch, EISA, interrupt, IRQ conflict, ISA, jumper, Micro Channel Architecture, PCI local bus, VL bus. IRQ conflict n. The condition on a Wintel computer in which two different peripheral devices use the same IRQ to request service from the central processing unit (CPU). An IRQ conflict will prevent the system from working correctly; for example, the CPU may respond to an interrupt from a serial mouse by executing an interrupt handler for interrupts generated by a modem. IRQ conflicts can be prevented by the use of Plug and Play hardware and software. See also interrupt handler, IRQ, Plug and Play. irrational number n. A real number that cannot be expressed as the ratio of two integers. Examples of irrational numbers are the square root of 3, pi, and e. See also integer, real number. IRSG n. See Internet Research Steering Group. IRTF n. See Internet Research Task Force. IS n. See Information Services. ISA n. Acronym for Industry Standard Architecture. A bus design specification that allows components to be added as cards plugged into standard expansion slots in IBM Personal Computers and compatibles. Originally introduced in the IBM PC/XT with an 8-bit data path, ISA was expanded in 1984, when IBM introduced the PC/AT, to permit a 16-bit data path. A 16-bit ISA slot actually consists of two separate 8-bit slots mounted end-to-end so that a single 16-bit card plugs into both slots. An 8-bit expansion card can be inserted and used in a 16-bit slot (it occupies only one of the two slots), but a 16-bit expansion card cannot be used in an 8-bit slot. See also EISA, Micro Channel Architecture. ISAM n. See indexed sequential access method. ISAPI n. Acronym for Internet Server Application Programming Interface. 
An easy-to-use, high-performance interface for back-end applications for Microsoft’s Internet Information Server (IIS). ISAPI has its own dynamic-link library, which offers significant performance advantages over the CGI (Common Gateway Interface) specification. See also API, dynamic-link library, Internet Information Server. Compare CGI. ISAPI filter n. A DLL file used by Microsoft Internet Information Server (IIS) to verify and authenticate ISAPI requests received by the IIS. ISA Server n. See Internet Security and Acceleration Server. ISA slot n. A connection socket for a peripheral designed according to the ISA (Industry Standard Architecture) standard, which applies to the bus developed for use in the 80286 (IBM PC/AT) motherboard. See also ISA. ISC n. See Internet Software Consortium. ISDN n. Acronym for Integrated Services Digital Network. A high-speed digital communications network evolving from existing telephone services. The goal in developing ISDN was to replace the current telephone network, which requires digital-to-analog conversions, with facilities totally devoted to digital switching and transmission, yet advanced enough to replace traditionally analog forms of data, ranging from voice to computer transmissions, music, and video. ISDN is available in two forms, known as BRI (Basic Rate Interface) and PRI (Primary Rate Interface). BRI consists of two B (bearer) channels that carry data at 64 Kbps and one D (data) channel that carries control and signal information at 16 Kbps. In North America and Japan, PRI consists of 23 B channels and 1 D channel, all operating at 64 Kbps; elsewhere in the world, PRI consists of 30 B channels and 1 D channel. Computers and other devices connect to ISDN lines through simple, standardized interfaces. See also BRI, channel (definition 2), PRI. ISDN terminal adapter n. The hardware interface between a computer and an ISDN line. See also ISDN. I seek you n. See ICQ. ISIS or IS-IS n. Acronym for Intelligent Scheduling and Information System. A toolkit designed to help prevent and eliminate faults in manufacturing systems. Developed in 1980 at Cornell University, ISIS is now available commercially. ISLAN n. See isochronous network. ISMA n. Acronym for Internet Streaming Media Alliance. A nonprofit organization promoting the adoption of open standards for the streaming of rich media over Internet Protocol (IP) networks. ISMA membership consists of a number of technology companies and groups including Apple Computer, Cisco Systems, IBM, Kasenna, Philips, and Sun Microsystems. See also Windows Metafile Format. ISO n. Short for International Organization for Standardization (often incorrectly identified as an acronym for International Standards Organization), an international association of 130 countries, each of which is represented by its leading standard-setting organization—for example, ANSI (American National Standards Institute) for the United States. The ISO works to establish global standards for communications and information exchange. Primary among its accomplishments is the widely accepted ISO/OSI reference model, which defines standards for the interaction of computers connected by communications networks. ISO is not an acronym; rather, it is derived from the Greek word isos, which means “equal” and is the root of the prefix “iso-.” ISO 8601:1988 n. A standard entitled “Data elements and interchange formats” from the International Organization for Standardization (ISO) that covers a number of date formats. ISO 9660 n. 
An international format standard for CD-ROM adopted by the International Organization for Standardization (ISO) that follows the recommendations embodied in the High Sierra specification, with some modifications. See also High Sierra specification. ISOC n. See Internet Society. isochronous network n. A type of network defined in the IEEE 802.9 specification that combines ISDN and LAN technologies to enable networks to carry multimedia. Also called: Integrated Services LAN, ISLAN. isometric view n. A display method for three-dimensional objects in which every edge has the correct length for the scale of the drawing and in which all parallel lines appear parallel. An isometric view of a cube, for example, shows the faces in symmetrical relation to one another and the height and width of each face evenly proportioned; the faces do not appear to taper with distance as they do when the cube is drawn in perspective. See the illustration. Compare perspective view. Isometric view. A cube in isometric view and in perspective view. ISO/OSI reference model n. Short for International Organization for Standardization Open Systems Interconnection reference model. A layered architecture (plan) that standardizes levels of service and types of interaction for computers exchanging information through a communications network. The ISO/OSI reference model separates computer-to-computer communications into seven protocol layers, or levels, each building—and relying—upon the standards contained in the levels below it. The lowest of the seven layers deals solely with hardware links; the highest deals with software interactions at the application-program level. It is a fundamental blueprint designed to help guide the creation of networking hardware and software. See the illustration. Also called: OSI reference model. ISP n. Acronym for Internet service provider. A business that supplies Internet connectivity services to individuals, businesses, and other organizations. Some ISPs are large national or multinational corporations that offer access in many locations, while others are limited to a single city or region. Also called: access provider, service provider. ISSE n. See SSE. ISV n. See independent software vendor. IT n. Acronym for Information Technology. See Information Services. italic n. A type style in which the characters are evenly slanted toward the right. This sentence is in italics. Italics are commonly used for emphasis, foreign-language words and phrases, titles of literary and other works, technical terms, and citations. See also font family. Compare roman. Itanium n. An Intel microprocessor that uses explicitly parallel instruction set computing and 64-bit memory addressing. iterate vb. To execute one or more statements or instructions repeatedly. Statements or instructions so executed are said to be in a loop. See also iterative statement, loop. iterative statement n. A statement in a program that causes the program to repeat one or more statements. Examples of iterative statements in Basic are FOR, DO, REPEAT..UNTIL, and DO..WHILE. See also control statement. ITI n. See Intelligent Transportation Infrastructure. I-time n. See instruction time. ITM n. Short for Internet traffic management. The analysis and control of Internet traffic to improve efficiency and optimize for high availability. With ITM, Web traffic is distributed among multiple servers using load balancers and other devices.
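One of the simplest distribution policies such a load balancer can apply is round robin, sketched below in Python. The server names are placeholders, and production balancers layer health checks, weighting, and session affinity on top of this basic rotation.

```python
import itertools

# Placeholder back-end servers; a real deployment would use actual addresses
# and remove servers that fail health checks.
SERVERS = ["web-1.example.net", "web-2.example.net", "web-3.example.net"]

# itertools.cycle yields the servers in order and wraps around forever,
# which is exactly the round-robin policy.
_rotation = itertools.cycle(SERVERS)

def pick_server() -> str:
    """Return the back end that should receive the next request."""
    return next(_rotation)

for request_number in range(5):
    print(request_number, pick_server())
```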
See also load balancing. ITR n. See Internet Talk Radio. ITSP n. Acronym for Internet Telephony Service Provider. A business that supplies PC-to-telephone calling capabilities to individuals, businesses, and organizations. Through an ITSP, calls initiated on a PC travel over the Internet to a gateway that, in turn, sends the call to the standard public switched phone network and, eventually, to the receiving telephone. See also ISP, telephony. ITU n. Acronym for International Telecommunication Union. An international organization based in Geneva, Switzerland, that is responsible for making recommendations and establishing standards governing telephone and data communications systems for public and private telecommunications organizations. Founded in 1865 under the name International Telegraph Union, it was renamed the International Telecommunication Union in 1934 to signify the full scope of its responsibilities. ITU became an agency of the United Nations in 1947. A reorganization in 1992 aligned the ITU into three governing bodies: the Radiocommunication Sector, the Telecommunication Standardization Sector (ITU-TSS, ITU-T, for short; formerly the CCITT), and the Telecommunication Development Sector. See also ITU-T. ITU-T n. The standardization division of the International Telecommunication Union, formerly called Comité Consultatif International Télégraphique et Téléphonique (CCITT). The ITU-T develops communications recommendations for all analog and digital communications. Also called: ITU-TSS. See also CCITT Groups 1-4, ITU. ITU-TSS n. See ITU-T. ITU-T V series n. See V series. ITU-T X series n. See X series. iTV n. Acronym for Interactive television. A communications medium combining television with interactive services. iTV offers two-way communications between users and communications providers. From their televisions, users can order special programming, respond to programming options, and access the Internet and additional services such as instant messaging and telephone functions. IVR n. See interactive voice response. IVUE n. A proprietary image format (from Live Pictures) that allows files to be adjusted to screen resolution at any zoom level. i-way n. See Information Superhighway.
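To make one of the data-structure entries above concrete, the following Python sketch builds an inverted list on a COLOR field, mirroring the car-records example given in the inverted list entry (the additional Blue record is invented and added only for contrast).

```python
from collections import defaultdict

# Records keyed by record number; several car records carry "Red" in COLOR,
# as in the inverted list entry above.
RECORDS = {
    3: {"COLOR": "Red"}, 7: {"COLOR": "Red"}, 12: {"COLOR": "Blue"},
    19: {"COLOR": "Red"}, 24: {"COLOR": "Red"}, 32: {"COLOR": "Red"},
}

def build_inverted_list(records, field):
    """Map each value of `field` to the sorted list of record locators holding it."""
    index = defaultdict(list)
    for locator, record in records.items():
        index[record[field]].append(locator)
    return {value: sorted(locators) for value, locators in index.items()}

index = build_inverted_list(RECORDS, "COLOR")
print(index["Red"])   # [3, 7, 19, 24, 32]
```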
<urn:uuid:17a90fc6-1a9a-46f3-bc9b-90ab3dbc4b31>
CC-MAIN-2016-26
http://flylib.com/books/en/2.892.1.14/1/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397795.31/warc/CC-MAIN-20160624154957-00198-ip-10-164-35-72.ec2.internal.warc.gz
en
0.885061
28,382
3.03125
3
Parents and kids and money As parents, we understand the importance of literacy. We sit for hours reading with our children. However, children must be “literate” about money matters, too. Learning how to think about money and manage it wisely is an equally important life skill. We must patiently help our kids “sound out” the many ways to control money. Our kids will learn by doing. Some lessons will be thrilling. Others will be frustrating, even painful. In the end, we hope that our children will grow into financially responsible adults. The rewards are life-altering: living within their means, free from the anxieties of debt, and secure in their future. This section will give you tips and tools for teaching money literacy – and help you assess your ability to model good financial habits. Read on!
<urn:uuid:8a29a2a6-38a3-48ad-a679-051cad429adb>
CC-MAIN-2016-26
http://www.themint.org/parents/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395679.92/warc/CC-MAIN-20160624154955-00150-ip-10-164-35-72.ec2.internal.warc.gz
en
0.957141
173
3.53125
4
Aerospace tapes are used in construction, assembly, maintenance, and repair in the aerospace industry. They are primarily used for masking, wire wrapping, insulating, painting and stripping, vibration dampening, and protecting surfaces. Tapes are either single or double sided and permanent or removable. A range of materials make aerospace tapes suitable for different environments and applications. Polyimide tape is very strong and heat resistant. It is used to bond silicone and other difficult surfaces. Polyurethane tapes are extremely durable and are used to protect surfaces. Aluminum foil tapes are used to damp vibration and mask surfaces during paint stripping. Glass cloth aerospace tape is extremely flame retardant, solvent resistant, and used to protect heat-sensitive devices in areas exposed to high temperatures. Aerospace tapes are also used in vacuum sealing, fluid line identification, carpet installation, and for providing a moisture barrier.
<urn:uuid:f6546d2a-5429-481b-999f-33600e394116>
CC-MAIN-2016-26
http://www.rshughes.com/c/Aerospace-Tapes/3003/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393533.44/warc/CC-MAIN-20160624154953-00184-ip-10-164-35-72.ec2.internal.warc.gz
en
0.902577
176
2.59375
3
The Bone Growing Chamber: A New Model to Investigate Spontaneous and Guided Bone Regeneration of Artificial Defects in the Human Jawbone
Spontaneous bone repair and regeneration of jawbone defects have been insufficiently studied in the dental literature. The present study analyzes a new human model designed to evaluate the basis for spontaneous bone regeneration in human jawbones. Hollow titanium cylinders, termed bone growing chambers, were prepared with commercially pure titanium. Ten volunteers undergoing routine implant surgery were enlisted. A properly calibrated drill was used to prepare the bone-growing-chamber bed. The bone growing chamber was inserted inside the bone defect, and care was taken to submerge the cylinder at the level of the bone crest. After an adequate healing period, the bone growing chambers were retrieved with a small quantity of peripheral bone using a calibrated trephine bur. The retrieved specimens were processed to obtain thin undecalcified ground sections. The stable bone growing chambers showed bone tissue inside the growing space. The maturity of the regenerated bone was related to the time of removal. The bone growing chamber provides a well-defined space that is easy to prepare and to retrieve; its dimensions are always identical and it allows quantitative measurements of bone regeneration inside the chamber space.
<urn:uuid:1d8bb059-cec7-438c-ad42-ee776d4209d1>
CC-MAIN-2016-26
http://www.quintpub.com/journals/prd/abstract.php?iss2_id=492&article_id=5963&article=5&title=The%20Bone%20Growing%20Chamber:%20A%20New%20Model%20to%20Investigate%20Spontaneous%20and%20Guided%20Bone%20Regeneration%20of%20Artificial%20Defects%20in%20the%20Human%20Jawbone
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395160.19/warc/CC-MAIN-20160624154955-00058-ip-10-164-35-72.ec2.internal.warc.gz
en
0.93336
259
2.515625
3
In the pink month, or National Breast Cancer Awareness Month, we publish a report below to remind our readers that diet is important in preventing breast cancer – a disease that will eventually develop in one in eight women in the United States in their lifetime. Eating a Mediterranean diet may help significantly reduce the risk of breast cancer in postmenopausal women, a new study published in the July 14, 2010 issue of the American Journal of Clinical Nutrition suggests. The study, led by Antonia Trichopoulou, a specialist in epidemiology and medical statistics in Athens, Greece, along with colleagues from the Harvard School of Public Health, showed that women whose diet scored 2 points closer on a 0-9 scale to the traditional Mediterranean diet were 22 percent less likely to develop breast cancer. For the study, the researchers followed 14,807 women in the European Prospective Investigation into Cancer and Nutrition cohort in Greece for an average of 8.8 years and identified 240 incident cases of breast cancer. Participants' dietary patterns were evaluated on the 0-9 scale for similarity to the traditional Mediterranean diet. In the entire cohort, an increase of 2 points in the diet score was associated with a 12 percent reduction in breast cancer risk, but the researchers said the association was not significant. A diet 2 points closer to the Mediterranean diet did not seem to reduce the risk of breast cancer among premenopausal women, but among postmenopausal women it was associated with a 22 percent reduced risk. A health observer suggested that the Mediterranean diet may be more protective than the study shows, because the assessment of the study participants' diets could introduce errors. Olive oil, a key component of the Mediterranean diet, has been associated with a lower risk of breast cancer, according to the background information in the study report. Monounsaturated fatty acids in olive oil are at least partially responsible for the protective effect, the health observer suggested, because the use of olive oil means the participants used less vegetable oil, such as corn oil and soybean oil, which contain tumor-promoting omega-6 fatty acids. Breast cancer is expected to be diagnosed in more than 175,000 women and kill about 50,000 each year in the United States, according to the National Cancer Institute. The good news is that breast cancer is in many cases preventable. From foodconsumer.
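To make the "per 2 points" language concrete, here is a small illustrative calculation. It assumes the association is log-linear in the diet score, which is a common modeling convention rather than something stated in the article, so the numbers below only show how such risk ratios combine; they are not results from the study.

```python
# Illustrative only: assumes a log-linear association between diet score and risk,
# which is a modeling convention, not a finding reported in the article.

def combined_risk_ratio(rr_per_2_points: float, score_difference: float) -> float:
    """Scale a 'per 2 points' risk ratio to an arbitrary score difference."""
    return rr_per_2_points ** (score_difference / 2.0)

rr_postmenopausal = 0.78  # 22 percent lower risk per 2-point increase, as reported

for diff in (2, 4, 6):
    rr = combined_risk_ratio(rr_postmenopausal, diff)
    print(f"{diff}-point higher Mediterranean diet score -> risk ratio ~{rr:.2f} "
          f"({(1 - rr) * 100:.0f}% lower risk, under the log-linear assumption)")
```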
<urn:uuid:c4771066-26de-44eb-a5e6-11ee41ab934b>
CC-MAIN-2016-26
http://didyoudiet.com/blog/mediterranean-diet-linked-to-lower-risk-of-breast-cancer-video/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395166.84/warc/CC-MAIN-20160624154955-00002-ip-10-164-35-72.ec2.internal.warc.gz
en
0.958493
483
2.9375
3
By David Ponce
Remember how in the movie The Matrix, humans were used as energy sources by the machines? I personally thought the idea was inefficient; why not make batteries or something? But still, it appears that we are now the machines and have been able to rig a poor cockroach up with electrodes and squeeze out some measurable amount of electricity. "Maximum power density reached nearly 100 microwatts per square centimeter at 0.2 volts. Maximum current density was about 450 microamps per square centimeter." It's the chemicals within the roach that power this particular reaction. And if you want the gritty details of how it was done, just hit the jump for a fuller description and links.
The key to converting the chemical energy is using enzymes in series at the anode. The first enzyme breaks the sugar, trehalose, which a cockroach constantly produces from its food, into two simpler sugars, called monosaccharides. The second enzyme oxidizes the monosaccharides, releasing electrons. The current flows as electrons are drawn to the cathode, where oxygen from air takes up the electrons and is reduced to water. After testing the system using trehalose solutions, prototype electrodes were inserted in a blood sinus in the abdomen of a female cockroach, away from critical internal organs. "Insects have an open circulatory system so the blood is not under much pressure," Ritzmann explained. "So, unlike say a vertebrate, where if you pushed a probe into a vein or worse an artery (which is very high pressure) blood does not come out at any pressure. So, basically, this is really pretty benign. In fact, it is not unusual for the insect to right itself and walk or run away afterward." The researchers found the cockroaches suffered no long-term damage, which bodes well for long-term use. To determine the output of the fuel cell, the group used an instrument called a potentiostat. Maximum power density reached nearly 100 microwatts per square centimeter at 0.2 volts. Maximum current density was about 450 microamps per square centimeter. The study was five years in the making. Progress stalled for nearly a year due to difficulties with trehalase – the first enzyme used in the series. Lee suggested they have the trehalase gene chemically synthesized to generate an expression plasmid, which is a DNA molecule separate from chromosomal DNA, to allow the production of large quantities of purified enzyme from Escherichia coli. "Michelle then began collecting enzyme that proved to have much higher specific activities than those obtained from commercial sources," Lee said. "The new enzyme led to success."
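For a rough sense of scale, the reported current and power densities can be plugged into a back-of-the-envelope calculation. The sketch below is not from the study; the 1 cm^2 electrode area and the 1-milliwatt target load are made-up values chosen only to illustrate the arithmetic.

```python
# Back-of-the-envelope scaling from the reported figures.
# The 1 cm^2 electrode area and 1 mW target load are illustrative assumptions,
# not values from the study.

power_density_w_per_cm2 = 100e-6      # ~100 microwatts per square centimeter
current_density_a_per_cm2 = 450e-6    # ~450 microamps per square centimeter
voltage_v = 0.2                       # reported operating voltage

electrode_area_cm2 = 1.0              # hypothetical electrode area
target_load_w = 1e-3                  # hypothetical 1 mW sensor

power_w = power_density_w_per_cm2 * electrode_area_cm2
current_a = current_density_a_per_cm2 * electrode_area_cm2

print(f"1 cm^2 electrode: ~{power_w * 1e6:.0f} uW at {voltage_v} V "
      f"(~{current_a * 1e6:.0f} uA)")
print(f"Area needed for a {target_load_w * 1e3:.0f} mW load: "
      f"~{target_load_w / power_density_w_per_cm2:.0f} cm^2")
```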
<urn:uuid:5fc2047a-ec11-4883-b05e-0db7fb9c3d7a>
CC-MAIN-2016-26
http://www.ohgizmo.com/2012/02/02/we-are-now-able-to-harvest-electricity-from-cockroaches/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397695.90/warc/CC-MAIN-20160624154957-00122-ip-10-164-35-72.ec2.internal.warc.gz
en
0.949695
571
3.40625
3
At least two mutated RNA-binding proteins, TDP-43 and FUS, have been shown to cause the devastating neurological disease ALS (amyotrophic lateral sclerosis), more commonly known as Lou Gehrig's disease. But those are only two among roughly 250 other such proteins, some of which, when mutated or misfolded, may also be implicated in ALS and other neurodegenerative conditions. The lab of James Shorter, PhD, associate professor of Biochemistry and Biophysics, set out to isolate new culprits by focusing on RNA-binding proteins that harbor particular prion-like domains (PrLDs). These PrLDs normally help to assemble specific RNA complexes, but their mutated forms contribute to misfolded proteins, which tend to assemble into fibrils that disrupt RNA metabolism, perpetuating the neurological abnormalities that are at the root of ALS and other neurodegenerative disorders. A recent study from the Shorter group identified a link between PrLD mutations in the proteins hnRNPA2B1 and hnRNPA1 and ALS. They found these mutations present in two families with a rare inherited form of muscle, bone, and neurologic degeneration, and in another with familial ALS. The findings show that the diseases may be initiated either by environmental stress on the PrLDs or by mutations, but the presence of the PrLDs in RNA-binding proteins appears to mark them as definite candidates for causing ALS and other neurodegenerative conditions. Next, Shorter says, "We aim to understand how the prion-like domain enables misfolding and whether these RNA-binding proteins access prion-like molecules. We are also elucidating methods to prevent or reverse the misfolding of various RNA-binding proteins with prion-like domains and mitigate their toxicity."
<urn:uuid:c3f6cff6-71b3-4c89-b307-003510981ccf>
CC-MAIN-2016-26
http://www.uphs.upenn.edu/news/research/rna/neuro.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395560.69/warc/CC-MAIN-20160624154955-00141-ip-10-164-35-72.ec2.internal.warc.gz
en
0.938502
372
3.5
4
New threat from Canadian livestock
The sphere of antibiotic resistance has gotten wider, and more dangerous. A newly identified gene, MCR-1, enables ordinary bacteria to develop resistance against the strongest antibiotics available. In January 2016, the gene was detected in Canadian beef. MCR-1 is carried on a plasmid that can replicate autonomously in suitable hosts, which in this case are bacteria that were previously susceptible to medication. The gene produces an enzyme that makes bacteria immune to powerful antibiotics such as colistin, which is used as a last-ditch treatment when all other options fail. The gene most likely originated in livestock, but it has now been seen in bacterial samples taken from human beings. Microbiologists have long pointed to agricultural use of human antibiotics as an exacerbating factor in an already deadly problem. A new strategy may mitigate this effect: the end of sub-therapeutic use in farm animals.
Antibiotics and Animals
In December 2016, both Health Canada and the U.S. Food and Drug Administration will place restrictions on antibiotic use in animals. Livestock producers will require veterinary prescriptions before providing their animals with antibiotics. The goal of the new regulations is to significantly reduce antibiotic use, permitting antibiotics strictly as a method of preventing or treating ongoing infections. Agriculture has been incorporating antibiotics into animal production for over half a century. When given to healthy animals, the medication promotes weight gain: animals eat the same amount of food, but put on more weight. For those in the animal production industry, the medical breakthrough had profound economic implications. However, with widespread use, bacteria have more opportunities to resist the antibiotics. As living organisms with short generational time spans, bacteria evolve to develop immunity to these antibiotics, and the more frequently bacterial strains come into contact with the medication, the more quickly immunity develops. The prevalence of antibiotics, not only in farm animals but in medical settings as well, has led to an imminent crisis – a point at which infectious bacteria are immune to all forms of antibiotics.
The promising case of Ractopamine
The Health Canada and FDA ban will come into effect in December 2016. The ban is expected to change operational practices for feed and livestock producers. Some, including Dr. John Prescott, a professor emeritus in the University of Guelph's pathobiology department, say that in light of the ban, "growth promotion use in food animals in North America is going to come to an end". However, this is not necessarily the case. Ractopamine is an alternative growth promoter used by many North American farmers. Four versions of the beta-agonist have been approved by the Canadian Food Inspection Agency for use in animal feed. Although North American governments and farmers have by and large embraced Ractopamine, its use is still controversial. Over 160 countries have banned the additive, citing possible toxic effects in animals and people. Several significant markets ban meat imports that contain Ractopamine. When Russia created this policy in 2012, hog producers in Alberta and B.C. collectively ended the long-standing practice of using the growth stimulant in pork production. The executive director of Alberta Pork, Darcy Fitzgerald, stated that the transition was not very difficult, as a large portion of Alberta hog producers never used the growth stimulant.
Approximately 50% of producers were using Ractopamine before the ban. A similar transition may be possible in the wake of the new policy on antibiotics. According to a report from the U.S. Department of Agriculture, 40% of pigs, 50% of chickens, and 75% of cattle come from farms where antibiotics are used as growth enhancers. This means that a significant number of livestock producers are already able to maintain production without the sub-therapeutic use of antibiotics. The report also indicates that animals fed the antibiotics see only a 1-3% increase in weight; in many cases, the effect is not even statistically significant.
New policy: too little too late?
Canadian animal producers have already shown that they are adaptable to market demands. In the case of Ractopamine and the Western Canada hog industry, an additive that was once found in half of all production is now virtually gone. A federal policy, however, may not have the same effect. In a report from the CBC, critics argue that the changes implemented by Health Canada may not significantly reduce the amount of antibiotics used on Canadian farms. The ban specifically prohibits sub-therapeutic antibiotic use for the purpose of growth promotion. A concern is that the same amount of antibiotics will be given to animals, but instead of the intended use being 'growth promotion', it will be considered 'disease prevention'. In Canada, when antibiotics are purchased for disease prevention, a prescription is required. However, Canadian law permits farmers to import antibiotics for their own use without authorization from a Canadian veterinarian or medical professional. According to an evaluation conducted by Health Canada and the Public Health Agency of Canada, this may be an even greater threat to resistance than antibiotics used to promote animal growth. On the surface, the concerted Health Canada and FDA regulations appear to be a bold step in the fight against antibiotic resistance. Upon closer examination, there is no guarantee that the policy will decrease the amount of antibiotics given to Canadian livestock, which averages 1.6 million kg per year. As strains of resistant bacteria emerge at an increasing pace, the public requires a more thorough response. The startling discovery of the MCR-1 gene in Canadian beef means that action cannot come soon enough.
<urn:uuid:77add61f-815b-4897-bc58-62793f1c00cf>
CC-MAIN-2016-26
http://natoassociation.ca/meat-in-the-age-of-antibiotic-resistance/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783408840.13/warc/CC-MAIN-20160624155008-00022-ip-10-164-35-72.ec2.internal.warc.gz
en
0.950057
1,138
3.359375
3
A year after rolling out Dance Dance Revolution video games in West Virginia’s public schools, state officials say the program has helped improve the health of overweight students there. A study conducted by West Virginia University and the West Virginia Public Employees Insurance Agency found that obsese students playing the dance game halted any weight gain while their peers continued to put on average of six pounds over the testing period. The study also found the students improved their aerobic capacity and their arterial functions, which allow for better circulation of blood. The benefit is derived from the physical demands of the game, which require players to stomp on a touch pad in time with musical prompts on a screen. Perhaps most promising, the study found that the exercise through DDR helped improve self confidence and increased the students’ willingness to try new activities. That’s a big plus for the students, who are normally shy about undertaking new activities, especially ones that involve physical exertion. The 24-week at-home clinical study tested 50 students ages 7-12 who were in the 85th percentile of the body mass index. The study required participants to play the game five days per week for at least 30 minutes and to record their activity while researchers monitored their weight, blood pressure, body mass index, arterial function, fitness levels and attitudes towards exercise. The results would seem to validate the strategy of West Virginia health officials who helped pay for DDR games in all of the state’s 765 public schools. The state partnered with Konami Digital Entertainment, which has sold more than 3 million copies of the game in the United States since 2001. Researchers recognize the game is not a magic bullet. Indeed, the kids did not appear to lose weight over the course of the study. But they said the findings suggest active games can be a practical tool in helping attack childhood obesity. “The answer is clearly more exercise, but the challenge is finding something that appeals to this generation of technologically sophisticated children,” said Dr. Linda Carson, a professor at WVU who conducted the study. “DDR combines the appeal of “screen time” within a physical activity format. We are excited that we can now demonstrate that it is a valuable health tool and something kids enjoy.”
<urn:uuid:897d9c5a-086d-4353-a1fb-34c60ff63832>
CC-MAIN-2016-26
http://blog.sfgate.com/techchron/2007/02/01/ddr-video-game-shown-to-stop-weight-gain-improve-health/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399522.99/warc/CC-MAIN-20160624154959-00157-ip-10-164-35-72.ec2.internal.warc.gz
en
0.956958
459
2.78125
3
Its orbit period is 90 minutes, and its spatial resolution is less than a metre. According to sources, the satellite is designed to monitor Indian borders and to help in anti-infiltration and anti-terrorist operations. Potential applications also include tracking hostile ships at sea that could pose a military threat. According to ISRO, the satellite enhances its earth-observation capability, especially during floods, cyclones, landslides and other disasters requiring emergency management.
Photo: Reuters
Image: PSLV C-12 blasts off from Satish Dhawan space centre at Sriharikota, about 100 km (62 miles) north of the southern Indian city of Chennai, April 20, 2009.
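For readers who want to connect the quoted 90-minute period to an altitude, the sketch below applies Kepler's third law, assuming a circular orbit and taking the stated period at face value. It is a rough illustration, not an official ISRO figure.

```python
import math

# Rough altitude estimate from the quoted 90-minute period, assuming a circular orbit.
MU_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
EARTH_RADIUS_KM = 6371.0    # mean Earth radius

period_s = 90 * 60          # stated orbital period in seconds

# Kepler's third law: T = 2*pi*sqrt(a^3 / mu)  ->  a = (mu * T^2 / (4*pi^2))^(1/3)
semi_major_axis_m = (MU_EARTH * period_s**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = semi_major_axis_m / 1000 - EARTH_RADIUS_KM

print(f"Implied altitude for a 90-minute circular orbit: ~{altitude_km:.0f} km")
# Prints roughly 280 km, i.e. a low Earth orbit consistent with high-resolution imaging.
```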
<urn:uuid:b4ca6785-a6fd-45a6-b889-b9e85c022088>
CC-MAIN-2016-26
http://www.indiatimes.com/boyz-toyz/machines/awesome-facts-about-indian-spy-satellites-74956-5.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399522.99/warc/CC-MAIN-20160624154959-00006-ip-10-164-35-72.ec2.internal.warc.gz
en
0.954927
131
2.890625
3
How did the occasion of the Great Conference of Religions arise? A Hindu swami, Shugan Chander, who had been undertaking work of social service for a few years thought that people must be brought together on a common platform. He initiated the idea of the conferences of great religions. The first conference took place in Ajmer. The second conference was held in Lahore in 1896. [Next Question - Study Guide]
<urn:uuid:363ca647-4148-458f-9799-dbd1e1c54d5a>
CC-MAIN-2016-26
http://www.alislam.org/library/links/1-01.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395560.69/warc/CC-MAIN-20160624154955-00026-ip-10-164-35-72.ec2.internal.warc.gz
en
0.980735
89
3.359375
3
In 1926, members of the American Legion in Boulder put up a large billboard just south of the intersection of U.S. 287 and Arapahoe Road. There, tourists were directed to Boulder -- the jumping off point for summer tours to the glaciers. Throughout the 1920s, the glaciers (in what is now the Indian Peaks Wilderness area) were Boulder County's biggest tourist attractions. Advertising campaigns were in full swing for railroad visitors, as well. "Boulder is going to get the best travel because it has the best-selling proposition, the finest scenery, and the glaciers," a Burlington Railroad official told a reporter in 1921. "In thirty-six hours from Chicago, people can see more than the Alps provide in thrills and mountain grandeur." The Boulder Chamber of Commerce and the Denver & Interurban Railroad followed the Burlington Railroad's lead with their own billboards and a joint distribution of 75,000 advertising brochures. Although the mountains that comprise the rugged western boundary of Boulder County include several glaciers and snow fields, the major tourist attraction was the largest one, the Arapaho Glacier -- officially spelled without an "e." Whether tourists arrived by automobile or train, however, once they reached Boulder they completed the rest of their journey in a touring car and then on horseback. City engineer Fred Fair operated the Glacier Route Line and packed his customers into large automobiles that held seven passengers. Visitors were advised to bring overcoats and heavy wraps. Ladies were told to have "veils and other means for protecting themselves" against sunburn while at high elevations. First, the sightseers were chauffeured to a base camp northwest of Nederland, at Rainbow Lakes, where they were served coffee and sandwiches. Then they got on horseback, some for the first time. A few hours later, and many feet higher, the flatlanders walked and slid on ice. According to newspaper reports, they also delighted in throwing snowballs. Then the mountain tourists rode back down into camp for a cookout. Afterwards, they climbed back into the automobiles for their return trips to Boulder. Intending to capitalize on the tourism industry, Fair obtained permission from the Boulder County Commissioners to build a road to the top of an overlook (called the "saddle") above Arapaho Glacier. The proposed scenic lookout would be complete with a shelter house and refreshment stand. The first section of the road to the present Rainbow Lakes campground was completed in 1924 and is still in use today. Realizing that his project would cost at least $100,000, Fair teamed up with a Colorado Springs millionaire who suggested a toll road. Not everyone was happy about the plan, though, and it never materialized. In July 1929, Fred Fair promoted his glacier tours with a truckload of snow that he hauled from the mountains, then loaded onto a float in an Independence Day parade. Pretty girls were said to have thrown snowballs at bystanders. But, the tours didn't last. That same year, the city of Boulder purchased the 3,869-acre Arapaho Glacier watershed from the federal government for $1.25 per acre. The area was closed to public use, except for annual summer hikes sponsored by the Boulder Chamber of Commerce. By then, the glaciers' promotional years were over. Silvia Pettem and Carol Taylor write on history for the Daily Camera, alternating weeks. Email Silvia at [email protected], Carol at [email protected], or write to the Daily Camera, 5450 Western Ave., Boulder 80301-2709.
<urn:uuid:5187bdd8-1e8e-43d8-beb5-a223924dc919>
CC-MAIN-2016-26
http://www.dailycamera.com/lifestyles/ci_21218243/entertainment/entertainment/classicalmusic
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397865.91/warc/CC-MAIN-20160624154957-00076-ip-10-164-35-72.ec2.internal.warc.gz
en
0.972028
740
2.765625
3
Every musician claims to study and communicate a composer's intentions. For recent work, the task is facilitated by records that can document the author's plan. For music of the distant past, the task is largely reduced to sheer speculation, confounded by impenetrable barriers of lost customs, obscure notation and hopelessly vague literary descriptions. Perhaps the greatest challenge lies in music between these extremes, where we have ample but often confusing clues. Consider Gustav Mahler's wondrous Symphony # 4 in G major, the most accessible of his works that presents in microcosm all the characteristics of his distinctive vision - his love of nature, grotesque humor, scintillating orchestration, integration of song and abstract instrumentals, and a constant search for meaning amid the great questions of life - lacking only the epic scope that can alienate the unconverted from his other symphonic output. As with all deeply personal art, we naturally wonder what the creator meant to convey. Few would have asked Bach, Haydn or Mozart what their pieces meant, as their work either was overtly religious or accepted as pure arrangements of sound. In the 19th century, though, music became a vehicle for individual expression, and so audiences demanded to know just what a composer intended to convey. Following the conventions of his time, Mahler had provided detailed programmatic descriptions for his first three symphonies. But after a three-year dry spell, when he wrote his Fourth Symphony in the summers of 1899 and 1900 his attitude changed to reflect his burgeoning career. Better known as a conductor than a composer during his lifetime, Mahler wrote mainly during summer vacation breaks from his regular employment. In 1897 he had assumed the grueling mission of heading the prestigious but unruly Vienna Opera, which he transformed into an artistic marvel through intensive, exhaustive and exacting control. Once at the peak of his profession, he could afford to snub audience expectations and came to condemn program notes as superficial, preferring that listeners find meaning by applying their own intuition to the internal logic and content of his music, rather than regard it as a mere illustration of preconceived stories. As he put it: "I know the most wonderful names for the movements but I will not betray them to the rabble of critics and listeners so that they can subject them to banal misunderstandings and distortions." Yet, to his associates Mahler dropped ample hints as to his intentions. Mahler originally conceived his Fourth as a "humoresque" in six movements, alternating instrumentals and vocals, its focus rising from the earthly to heaven. He analogized the atmosphere of the entire symphony to the sky, whose uniform blue occasionally darkens yet always reemerges fresh and renewed. While his first movement was grounded in traditional formal structure (a sonata template) and arose from his intensive study of Bach, Mahler noted that its components are rearranged in increasingly complex patterns, like a kaleidoscope sifting through mosaic bits of a picture. He intended the second movement, a sinister scherzo relieved by two bucolic trios, as a dance of death, harkening back to the baroque technique of scordatura, in which a solo violin is tuned a whole tone higher than standard (A-E-B-F#) to produce a thinner, spectral sound. (Even beyond that, Mahler specifies playing "wie ein Fidel" ("like a medieval fiddle") - that is, crudely with no vibrato or other modern techniques.) 
He suggested that the third movement, an adagio set of variations built upon two contrasting but related themes, reflected his mother's sad face, constantly loving and pardoning in spite of immense suffering. The only radical gesture is saved for the finale toward which all the rest points with subtle thematic premonitions - a song written in 1892 that was to have been the seventh movement of his already massive Symphony # 3, but from which he wisely excised it. The text is drawn from Des Knaben Wunderhorn ("The Youth's Magic Horn"), an anthology of folk poetry his sister had given him in early 1892 and which had inspired all his output of that decade. While in the traditional form of a rondo, it eschews the conventional rousing symphonic culmination for a simple but ravishing naïve stroll through the joys of heaven. For Mahler, when mankind, full of wonder, asks what it all means, only a child can answer. Perhaps reflecting the human propensity to demand explanations of the abstract, later admirers have gone further to infer programs to augment the composer's vague suggestions. The most convincing is by Paul Bekker, who sees the variegated first movement as representing a journey through the existing world, the scherzo as the liberation of death, the variations as a metamorphosis through new possibilities of consciousness, and the fourth as the ultimate blissful fulfillment of our wishes. Yet, Theodor Adorno, among others, is less willing to step aside from the characteristic angst and depth that infuse Mahler's other work, charging the Fourth with mock emotions, irony and ambiguity to convey a message of pervasive sadness to negate its surface calm, reflecting his professional frustration at the Vienna Opera and no longer celebrating childlike innocence but rather mourning its loss. It's hard to believe nowadays that such a thoroughly lovely work encountered indifference and hostility by both audiences and critics. The 1901 Munich premiere, led by the composer, was booed and condemned as baffling and tasteless. The local antipathy may have stemmed from thwarted expectations for a colossal successor to Mahler's earlier work or perhaps the lack of insightful programmatic guidance, but clearly was fueled by the professional enmity created by his reforms at the Opera and further stoked by anti-Semitism (even though Mahler had converted to Catholicism as a condition of his Vienna post - an irrelevant detail to devoted bigots). Yet, even in America, that cradle of tolerance and free thinking, a 1904 New York concert was greeted as a "drooling and emasculated musical monstrosity, … the most painful musical torture to which [the critic] has been compelled to submit." The Fourth was the last of Mahler's nine symphonies to draw inspiration from his fascination with Des Knaben Wunderhorn. Warm and lyrical, and perhaps an escape from his personal problems, it was Mahler's glance back through the concision and simplicity of music of the past before he plunged ahead to the dense and massive brooding works with which he would conclude his career. As summarized by biographer Henry-Louis de la Grange, the Fourth combined deliberate simplicity with a wealth of invention, borrowing formulas from the past, enriched and transformed with inexhaustible imagination, while its restricted emotional palette could have been meant to rebuff critics who accused him of resorting to grandiose gestures. 
Thus, we find polyphony of the 17th century, the forms and light scoring of the 18th, motivic development of the 19th and even a glance ahead to the extreme intensification of the "new Viennese school" of the 20th. But regardless of what his Fourth means, how did Mahler expect it to be performed? Mahler's own conducting reportedly was full of tension, poised uneasily between precision and passion, clarity and spontaneity. While he never cut any records, he did make four 1905 piano rolls, including the final movement of the Fourth, but it's bizarre. His score contains the admonitions that: "It is of the greatest importance that the singer be extremely discreetly accompanied" and "To be sung with childlike and serene expression, absolutely without parody." Yet, his playing is full of quirky rubato and his arrangement largely disregards the vocal line and the many detailed expressive and dynamic felicities specified in the score, thus seemingly to violate the express interpretive directives he so pointedly specified for others. It's tempting to dismiss the roll as an anomaly - even aside from the challenge of condensing a 17-stave score into two hands, Mahler never was deemed a virtuoso pianist and may have been unnerved by his first (and only) exposure to the demands of the unfamiliar technology. But since he likely was far more tempted to extemporize when playing by himself in private than when leading a full orchestra in concert, perhaps the roll is best viewed as riffing rather than a stylistic guide left for posterity. Although after the playback Mahler wrote in the studio guest book: "In astonishment and admiration," his reference may have been to the wonder of the technology rather than to the artistic value of the result. (Incidentally, while the homogeneous, staccato playing of standard piano rolls, often corrupted with extra notes, have a deservedly poor reputation for fidelity, Mahler's were cut in the Welte-Mignon process that recorded not only the notes but their nuance and provides an uncannily accurate reproduction of the original quality. Reportedly, he was well paid for his single afternoon effort, but the rolls sold only a few copies; in addition to requiring the purchase of a costly reproducing mechanism, they were prohibitively expensive for mass distribution - $14.50 in America.) Our next best evidence of Mahler's own style is equally confounding - the utterly irreconcilable recordings of the Fourth left by his two primary acolytes. Mahler considered Willem Mengelberg to be the finest interpreter of his work. As head of the famed Concergebouw Orchestra, Mengelberg was an ardent advocate, conducting Mahler's symphonies at hundreds of concerts through the years. The first successful presentation of the Fourth was when Mahler conducted the work - twice - at an October 1904 Concertgebouw concert! Mengelberg's copy of the score is a uniquely valuable document, packed with annotations added during the rehearsals, in both his and Mahler's hands, including metronome markings, expressive phrasing and explanations of the composer's wishes. A recording of a 1939 Mengelberg Concertgebouw concert is the most heavily-inflected of all, utterly fascinating in both its detail and its overall thrust. The very outset is startling, as Mahler's poco rit. (slight slowing) at the third measure becomes a hugely suspenseful grand pause before gliding into a breathtakingly smooth transition to the first theme. 
The sonic quality is fine and the soloist, Jo Vincent, is appropriately earnest yet ingenuous. Bruno Walter was Mahler's assistant conductor and foremost protégé. Upon leaving the Vienna Opera, Mahler wrote to him: "I know of no one who understands me as well as I feel you do and I believe I have entered deep into the mine of your soul." Indeed, Mahler relied upon Walter to explain the Fourth to critics. In his biography of Mahler, Walter described the Fourth as dreamlike and unreal, a fairy tale of airy imponderability and blissful exaltation. His 1945 New York Philharmonic recording of the Fourth is a world apart from Mengelberg's interpretation - deeply humanistic, with an utterly natural flow, shorn of even a hint of exaggeration and, at 50 minutes, the swiftest on record, yet with no sense of being rushed. As an added touch of authenticity, his soloist is Desi Halban, daughter of Selma Kurz, who had studied and performed extensively with the composer. We also have several later Walter concerts, including a 1953 NY Philharmonic outing with sharp details and accents, and culminating in a mesmerizing 60-minute 1960 Vienna Philharmonic version (the slowest on record!) that's all tender grace and lilting elegance. Timings aside, all the Walter versions share a deeply humanistic vision, shorn of even a hint of personal intrusion or exaggeration. There are a lot of them - as compiled by the mahlerrecords.com website, of the first 20 known recordings of the work (including concerts), Walter led nine! The very first recording of the Mahler Fourth – and, indeed, only the second of any Mahler symphony – came in May 1930 from a most improbable source: the New Symphony Orchestra of Tokyo, led by its founder Hidemaro Konoye, a pioneer in bringing Western classical music to Japan (where it flourished). Although he later would record stylish Haydn and Mozart with the Berlin Philharmonic in 1938 (as well as the noxious Horst Wessel Lied in a gesture apparently intended to show wartime Axis solidarity), Konoye's Mahler, despite occasional felicitous touches, is hugely disappointing, with painfully poor playing, a tremulous soprano and uninspired leadership. Disfiguring cuts further compromise the impact of the adagio climax and the soft end of the finale. The overall result is more an historical curiosity than a trail-blazing, satisfying musical experience. Both Mengelberg's and Walter's credentials are above challenge. But which is the more reliable measure of Mahler's own approach, if either? In populating the vast spectrum between their divergent styles, most conductors favor Walter's ethereal bliss over Mengelberg's emphatic individuality. Among the few subsequent exponents of the Mengelberg approach is Simon Rattle, whose recording with the Birmingham Symphony (1998, EMI) is personal, probing and full of alluring hues and rhythms that often depart from the score but always seem within the overall spirit of the piece. Unlike the rest of his current live DG Berlin Philharmonic Mahler cycle, Claudio Abbado's Fourth is more affected than effective, often eluding the pervasive tenderness for quirky emphases, from a violent first movement climax through prominent middle voices to exaggerated word-painting in the finale. The mood is extended, though, with a fitting and generous bonus - Alban Berg's Seven Early Songs, inspired by both nature and Mahler. 
Many classic objective accounts with relatively sharp detail, steady tempos and subtle nuance tend to run aground by either attenuating the spooky, biting terror Mahler intended with the scordatura solo violin of the second movement or by using a full-blown operatic voice that drains the concluding song of its winsome naiveté. Even so, among my favorites are Otto Klemperer and the Philharmonia (1961, EMI), who boast prominent winds and divided violins for added clarity of counterpoint; Fritz Reiner and the Chicago Symphony (1958, RCA), who play with icy precision (including an especially chilling scordatura) but seem a bit brittle; Evgeny Svetlanov and the Russian State Symphony Orchestra (1996, Russian Season), who lull us before a shockingly powerful third movement climax and a deeply mysterious concluding song; Jascha Horenstein and the London Philharmonic (1970, Classics for Pleasure), who match the autumnal pace of Walter's final concert with a fascinating balance of monumental mysticism and heartfelt charm; and John Barbirolli and the BBC Symphony (1967, BBC), who invest the variations with unsettling, searching questions. Other acclaimed accounts, but without distinctive features, include those of George Szell and the Cleveland Orchestra (1965, Sony), Michael Gielen and the SWR Sinfonieorchester Baden-Baden und Freiburg (generously coupled with Franz Schreker's colorful Prelude to a Drama, yet presenting outrageously incongruous Van Gogh cover art, 1988, Hänssler) and Pierre Boulez and the Cleveland Orchestra (1998, DG), whose precision seems a bit too clinical, even while fostering appreciation for the splendor of Mahler's orchestration. Of the two recordings by Leonard Bernstein, the foremost Mahler specialist of his time, his 1960 New York Philharmonic reading (Sony) is vibrant, vital and intensely human, and has the inspired solo choice of Reri Grist, who sang "Somewhere" so affectingly in the original cast of his West Side Story. His 1987 live Concertgebouw remake (DG) is even more deeply-felt but crashes in the finale with the disastrous use of a boy soprano, whose literal depiction ruins the essential artistic illusion and undermines the fundamental allure of adults pining for an innocence irretrievably lost, except in our dreams - or in our music.
Copyright 2006 by Peter Gutmann. All rights reserved.
<urn:uuid:dae8e45e-5dba-4d83-99bf-ee368f251c0b>
CC-MAIN-2016-26
http://www.classicalnotes.net/classics/mahler4.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395039.24/warc/CC-MAIN-20160624154955-00068-ip-10-164-35-72.ec2.internal.warc.gz
en
0.961808
3,414
2.765625
3
Julian D. Olden
Examines ecological and social issues associated with water resources as human populations increase and climate warms.
Water will be the Oil of the 21st century and beyond - the invaluable commodity that determines the wealth of nations, and the health of humans and the freshwater ecosystems upon which we depend. Mark Twain once said, "Whiskey is for drinking; water is for fighting over." We all know too well the importance of clean, fresh water; but do you know the real reasons why water shortages have led to environmental degradation and intense social conflicts throughout the globe? Many of the most dangerous human diseases are water-borne; how are society's actions exacerbating these? Why is the biodiversity of freshwater ecosystems the most imperiled on the planet? Is Seattle really a 'wet' place or are we running out of sustainable water supplies? This course will examine these and many related questions to improve our understanding of human dependencies and effects on freshwater ecosystems.
Student learning goals
As a result of this course, students will have a strong understanding of the tight linkages between water, the environment, and human society. Specifically, this course aims to i) introduce students to contemporary issues and challenges in freshwater ecology and resource management; ii) develop students' skills to critically evaluate scientific information; iii) develop students' writing skills to effectively communicate issues to a variety of audiences; iv) increase awareness that human existence depends on a supply of clean and abundant water; and v) explore ways that individuals and society can reduce their impacts on water resources.
General method of instruction
Class assignments and grading
<urn:uuid:db85ef3e-a3b2-4124-9299-96cceb5feb63>
CC-MAIN-2016-26
https://www.washington.edu/students/icd/S/fish/101olden.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403823.74/warc/CC-MAIN-20160624155003-00002-ip-10-164-35-72.ec2.internal.warc.gz
en
0.908003
327
3.40625
3
Toronto, August 29, 2013 - Going away to college or university marks a milestone in the lives of young adults and their parents. For the child it’s an exciting time to develop new relationships, take on additional responsibility, and develop their own identity. For parents it can be a time when they begin to ask themselves, ‘what role do I play?’ Hearing stories of drinking, bullying and property damage at school can worry parents about how to keep their kids safe. Dr. David Wolfe, Director of CAMH’s Centre for Prevention Science has developed a tip sheet to help parents better communicate and build healthy relationships with their children as they transition. Ten tips for parents - Be a parent (not a friend). While growing up children learned to depend on you for mature advice and guidance. Continue this role, and step back a bit from needing to know everything in their life. - Don’t intrude. Let them make new friends, while knowing you’re still a major part of their life. Try to resist the temptation to contact them too often, through emails, text messages, Facebook and phone calls. Let them take the lead. - Don’t pressure. Parents are sometimes too eager to see their kids find their niche, settle their plans, and reach their goals. This can come across to them as pressure or demands. Finding their niche takes time, and it’s their time and their life. - Avoid “helicopter parenting”. Some say today’s parents are more hovering and protective than previous generations, which can make the process of transition difficult for some who are used to daily contact with parents. While it is important that you provide ongoing support and remain involved and interested in your child’s life, you must be willing to back-off and let them grow. - Encourage new ideas. College and university is a time to explore new options and be exposed to new possibilities, so encourage them to investigate new courses and interests, even if it could mean a change in focus or delay in completing their degree. In the long run this is time well spent, for they will have chosen a career that is best for them. - Be supportive. It is important for students to feel supported, but still in charge. Students who learn to manage the tension and worry associated with academic and social changes end up more successful and well-adjusted. Your role involves listening and guiding, not directing, cajoling, or pressuring. - Encourage friendships and connection. Students who develop meaningful relationships with peers have fewer emotional and physical symptoms of stress. Encourage them to try new interests, develop new friendships, and go to new places – even if they’re a bit uncomfortable. Encouraging connection is especially important if your child lives at home. - Be a touchstone of maturity and good advice. In an effort to make friends and fit in (or to cope with stress and anxiety), some students engage in excessive drinking, drug use, promiscuous sexual activity, and other health-compromising activities. Rather than telling your child what they can or cannot do, let them know what you expect of them, how proud you are of their efforts, and that you are available if they need advice - Assist with time and money management. Many students are ill-prepared at managing their time or their finances, which contributes to their stress. Many students also have credit cards and amass sizable debt, yet they may not have a good understanding of how to manage it. 
Resist the temptation to reduce stress by giving money – remind them of their choices and help them plan a budget.
- Recommend academic and student counseling resources. If your child seems to be struggling, the first line of defence may be to have them speak to a counselor. Academic or mental health counselors can help students find the right courses and learn better study habits, and can also assist with all other aspects of health and well-being, including therapies to improve coping skills and strengthen relationships and connections.
Dr. Wolfe has been pioneering new approaches to preventing many societal youth problems such as bullying, relationship violence, and substance abuse, and strongly advocates that forming healthy relationships with children and adolescents should be a public health priority. He has developed a multi-grade curriculum called The Fourth "R" on forming healthy relationships, which is currently being used throughout Canada and the U.S.
To schedule an interview with Dr. Wolfe, please contact Michael Torres, CAMH Media Relations, 416-595-6015; or by email at [email protected]
The Centre for Addiction and Mental Health (CAMH) is Canada's largest mental health and addiction teaching hospital, as well as one of the world's leading research centres in the area of addiction and mental health. CAMH combines clinical care, research, education, policy development and health promotion to help transform the lives of people affected by mental health and addiction issues. CAMH is fully affiliated with the University of Toronto, and is a Pan American Health Organization/World Health Organization Collaborating Centre.
<urn:uuid:7c391497-530e-46f7-93a5-825758ace72e>
CC-MAIN-2016-26
http://www.camh.ca/en/hospital/about_camh/newsroom/news_releases_media_advisories_and_backgrounders/current_year/Pages/CAMH%E2%80%99s-back-to-school-basics.aspx
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396872.10/warc/CC-MAIN-20160624154956-00069-ip-10-164-35-72.ec2.internal.warc.gz
en
0.957033
1,071
2.828125
3
In medical schools and neuroscience laboratories around the world, researchers use maps and diagrams of the human brain to teach the fine anatomy of the cerebral cortex. One of the most popular of these is a diagram drawn by German neurologist Korbinian Brodmann back in 1909, detailing the cellular composition of 52 areas in the brain. “It’s only recently that anatomic approaches have become popular again,” said Peter Sterns, a senior editor at Science who oversaw the publication of the paper, at a press conference on Wednesday. “Researchers have begun to realize that without a really deep knowledge of structures involved, we will never have an understanding of the data being produced by many of these other techniques.” “BigBrain is the first ever brain model in 3-D that really presents a realistic human brain with all the cells and all the structures,” said study co-author Karl Zilles, a neuroscientist at the University of Düsseldorf and Research Center Jülich in Germany. The model is the product of over 1000 hours of labor simply to collect the data, according to the authors. The brain of a 65-year-old woman was sliced into 7404 sections, each only 20 microns thick—like a flimsy piece of plastic Saran Wrap. Each section was then mounted on a slide, stained to visualize cellular structures, and scanned into a computer. Then that data was digitally reassembled into a 3-D object, including both manual and digital repairs to fix image defects such as rips, tears, folds, and distortions. The final reconstruction is on the order of a terabyte of data, said lead author Alan Evans of McGill University in Montreal, Canada, whose team produced software tools to allow researchers to explore the data. “It is the equivalent of something over 100 times larger than a typical MRI [scan] volume.” “We can now answer questions that can not be addressed using previous brain models because they require resolution at the cellular level,” said Zilles. The new resolution is powerful enough to visualize individual large neurons, such as those 120 micrometers in diameter. Researchers will also be able to use the new atlas as a scaffold to pile on additional data of their own, such as the distribution of receptors in the brain, to learn more about brain function and dysfunction, said Zilles. “It is a common basis for scientific discussions because everybody can work with this brain model and we can speak about the same basic findings,” he said. The BigBrain is free and publically available to the research community. 1. Amunts, K., C. Lepage, L. Borgeat, H. Mohlberg, T. Dickscheid, M.-Ã. Rousseau, S. Bludau, P.-L. Bazin, L. B. Lewis, A.-M. Oros-Peusquens, et al. 2013. BigBrain: An Ultrahigh-Resolution 3D human brain model. Science 340(6139):1472-1475.
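The section count and slice thickness quoted above invite a quick sanity check of the scale involved. The sketch below uses only the figures mentioned in the article plus an assumed 20-micron in-plane sampling over a hypothetical 150 mm field of view per section; the resulting voxel and storage estimates are rough illustrations, not the project's actual numbers.

```python
# Quick scale check using figures quoted in the article.
# The 150 mm field of view and 20-micron in-plane sampling are assumptions
# for illustration; only the section count and thickness come from the text.

sections = 7404
slice_thickness_um = 20

stack_height_mm = sections * slice_thickness_um / 1000
print(f"Total stacked thickness: ~{stack_height_mm:.0f} mm of tissue")   # ~148 mm

voxels_per_section = (150_000 // 20) ** 2          # 20-micron grid over 150 mm x 150 mm
total_voxels = voxels_per_section * sections
print(f"~{total_voxels:.2e} voxels -> ~{total_voxels / 1e12:.2f} TB at 1 byte per voxel")
# Roughly 0.4 TB, the same order of magnitude as the article's "order of a terabyte".
```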
<urn:uuid:e10284a2-10a3-4d77-8b91-a24a38c6fd70>
CC-MAIN-2016-26
http://www.biotechniques.com/news/Brain-Anatomy-Gets-3-D-High-Resolution-Makeover/biotechniques-344293.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403826.29/warc/CC-MAIN-20160624155003-00036-ip-10-164-35-72.ec2.internal.warc.gz
en
0.930264
643
3.328125
3
Engineered Molecules for Smarter Medicines
Specially designed polymers can dodge the body's immune defenses to deliver vital medicine where it is needed most.
Sidebar: Tripping the Light Fantastic
Alongside materials that respond to changes in temperature is another category of intriguing smart polymers whose changes are triggered instead by light. Because light is a clean stimulus that allows remote control without physical contact, these photo-responsive polymers are useful for a wide variety of applications, including smart optical systems, micro-electrochemical systems, sensors, textiles, coatings, and solar cells. One area receiving attention now is photo-responsive smart materials that change their color in response to light. In our laboratory we are currently investigating a particular family of polymers that incorporate an organic compound called spiropyran. This molecule has ring-shaped structures that open and close when exposed to certain wavelengths of light—or, in some cases, when exposed to changes in acidity. Cleavage of the ring at the site of a specific carbon-oxygen bond creates a chromophore (color-producing region) that strongly absorbs visible light. Like other smart materials discussed in the article, this polymer holds promise for biomedical application in the encapsulation and targeted release of drugs. In a completely different application, the textile industry uses photo-responsive polymers woven into apparel to produce garments that change their color or reveal a print pattern in the sunlight, or fibers that act as an indicator of over-exposure to the sun—a feature that might be appreciated, for example, in children's clothing.
Smart materials can be designed to conduct electrons in response to the absorption of electromagnetic radiation. Instantaneously, an electron can be raised to a higher energy level. The electron then relaxes from the high-energy state to the ground state, releasing a photon. Smart materials are also being used to harvest sunlight for photovoltaic systems. At present, however, these polymers do not perform as efficiently as the semiconducting materials used in the computing industry. However, polymers are considered the next-generation material for photovoltaic devices because they are so inexpensive to manufacture. Research is under way to improve the performance of polymer-based solar cells. Ideally, the polymer models would respond to a broad range of light (infrared to visible wavelengths), increasing their efficiency to (for example) power electronic devices. Working toward this goal, our group has an ongoing interest in developing materials known as paraphenylene oligomers. Experimental and theoretical studies carried out in our group show that tagging those molecules with additional molecular groups causes the photo-responsive polymers to absorb longer, redder wavelengths of light. These materials could be tuned to respond to an even broader range of light, we find, by adding a strong electron acceptor group. Polymers containing transition metals such as ruthenium are particularly well suited as antennae for attracting light, a feature that facilitates its absorption and electron transfer (and/or energy transfer) within the material. Our group, in collaboration with colleagues at the University of North Carolina–Chapel Hill, has been using ruthenium in combination with a polymer to form a hybrid system (such as ruthenium (II) polypyridyl derivatized polystyrene) that looks particularly promising. 
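To put the "infrared to visible" absorption range into energetic terms, the short sketch below converts wavelength to photon energy with E = hc/λ. The specific wavelengths chosen (900 nm near-infrared, 650 nm red, 450 nm blue) are illustrative values, not measurements from the authors' polymers.

```python
# Photon energy E = h*c / wavelength, reported in electron-volts.
# The wavelengths below are illustrative, not measured values for these polymers.

H = 6.626e-34        # Planck's constant, J*s
C = 2.998e8          # speed of light, m/s
EV = 1.602e-19       # joules per electron-volt

for label, wavelength_nm in [("near-infrared", 900), ("red", 650), ("blue", 450)]:
    energy_ev = H * C / (wavelength_nm * 1e-9) / EV
    print(f"{label:>13} ({wavelength_nm} nm): ~{energy_ev:.2f} eV per photon")

# Shifting absorption toward longer (redder) wavelengths means the polymer can be
# excited by lower-energy photons, widening the usable portion of the solar spectrum.
```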
This line of investigation should yield a better understanding of how electron and photon energy are transported within the photo-responsive polymers. Our latest studies suggest that the relative spatial arrangement between the ruthenium ions in the polymer is a key parameter influencing the way it transports electrons through the system. Such discoveries are bringing us closer to developing polymer-based solar cells that can compete with coal- and natural gas–fired power plants. Understanding the internal physics of light-absorbing polymers is essential to making further advances. Although conventional, silicon-based photovoltaic arrays are already in widespread use, polymer-based solar devices would offer significant advantages. Polymers are inexpensive to develop, easily scaled to large manufacturing quantities, lightweight, and adaptable to a wide variety of design criteria. Refining and commercializing this technology would benefit not only the solar panel industry but also the textile industry, as it becomes possible to weave electron-conducting smart materials into fabrics. In fact, the two industries may someday come to overlap. Incorporating light-responsive smart materials into textiles could transform the way we interact with our electronics, perhaps leading ultimately to devices that can be charged by contact with our clothing. - Al-Ahmady, Z. S., et al. 2012. Lipid–peptide vesicle nanoscale hybrids for triggered drug release by mild hyperthermia in vitro and in vivo. American Chemical Society Nano 6 :9335–9346. - Andersson, J., S. Li, P. Lincoln, and J. Andreasson. 2008. Photoswitched DNA-binding of a photochromic spiropyran. Journal of the American Chemical Society 130:11836–11837. - Boutris, C., E. G. Chatzi, and C. Kiparissides. 1997. Characterization of the LCST behaviour of aqueous poly(N-isopropylacrylamide) solutions by thermal and cloud point techniques. Polymer 38:2567–2570. - Chen, K.-J., et al. 2013. A thermoresponsive bubble-generating liposomal system for triggering localized extracellular drug delivery. American Chemical Society Nano 7:438–446. - Fang, Z., et al. 2013. Inorganic Chemistry 52:8511–8520. - Ipe, B. I., S. Mahima, and K. G. Thomas. 2003. Light-induced modulation of self-assembly on spiropyran-capped gold nanoparticles: A potential system for the controlled release of amino acid derivatives. Journal of the American Chemical Society 125:7174–7175. - Jeong, B., and A. Gutowska. 2002. Lessons from nature: Stimuli-responsive polymers and their biomedical applications. Trends in Biotechnology 20:305–311. - Le, K., L. B. Chand, C. Griffin, A. L. Williams, and D. K. Taylor. 2013. Tetrahedron Letters 54:3097–3100. - Leppert, P. C., T. Baginski, C. Prupas, W. H. Catherino, S. Pletcher, and J. H. Segars. 2004. Comparative ultrastructure of collagen fibrils in uterine leiomyomas and normal myometrium. Fertility and Sterility 82:1182–1187. - Matchar, D.B., et al. 2001. Management of uterine fibroids. Evidence Report/Technology Assessment (Summary), United States Public Health Service, 1–6. - Mather, P. T. 2007. Soft answers for hard problems. Nature Materials 6:93–94. - Minkin, V. I. 2004. Photo-, thermo-, solvato-, and electrochromic spiro heterocyclic compounds. Chemical Reviews 104:2751–2776. - Nirmal, H.B., S. R. Bakliwal, and S. P. Pawar. 2010. In-Situ gel: New trends in controlled and sustained drug delivery system. International Journal of PharmTech Research 2:1398–1408. - Roy, D., J. Cambre, and B. Sumerlin. 2010. 
Future perspectives and recent advances in stimuli-responsive materials. Progress in Polymer Science 35:278–301. - Scarmagnani, S., et al. 2010. Photoreversible ion-binding using spiropyran modified silica microbeads. International Journal of Nanomanufacturing 5:38–52. - Taylor, D. K., F. L. Jayes, A. J. House, and M. A. Ochieng. 2011. Temperature-responsive biocompatible copolymers incorporating hyperbranched polyglycerols for adjustable functionality. Journal of Functional Biomaterials 2:173–194. - Taylor, D. K., and P. C. Leppert. 2012. Treatment for uterine fibroids: Searching for effective drug therapies. Drug Discovery Today: Therapeutic Strategies 9:e41–e49.
<urn:uuid:8f345b78-5758-41a1-b8ac-76eeff8cc927>
CC-MAIN-2016-26
http://www.americanscientist.org/issues/feature/2014/2/engineered-molecules-for-smarter-medicines/5
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783400031.51/warc/CC-MAIN-20160624155000-00047-ip-10-164-35-72.ec2.internal.warc.gz
en
0.877398
1,781
3.3125
3
Bokodes are "Imperceptible Visual Tags for Camera Based Interaction from a Distance." Put more simply, a Bokode is a visual data tag, usually only 3 mm wide, capable of holding thousands of times more information than a standard barcode. Bokodes can be read from a distance of up to a few meters by any standard digital camera, including those built into about a billion mobile phones around the world, which gives this technology great potential for growth. They're so small that they appear as a tiny dot to the human eye or to a camera in sharp focus, but an out-of-focus camera lens will see thousands of bits of information.

The name "Bokode" comes from an amalgamation of the words "bokeh" (a Japanese photographic term describing image blur or an out-of-focus area in a photograph) and "barcode." Bokodes (not spelt bocodes or borecodes) were developed by a team at the MIT Media Lab who saw an opportunity to upgrade the standard barcode, which is relatively large, limited in the information it can carry, and readable only from a short distance, into one that could help develop a new and more flexible interface between us and machines through the visual sharing of information.

What the camera sees: the Bokode is the center object.

How do they work? The pattern in a Bokode is a tiled series of data matrix codes containing thousands of bits of information that are almost invisible to the naked eye. The lens causes the pattern to spread and become readable to a digital camera. When the camera is pointed at a Bokode, it only sees a small portion of the Bokode information at a time, but the data is encoded in such a way that the camera knows its relative position to the Bokode.

What are the main advantages?
- Can contain far more information than a standard barcode
- Less obtrusive: classic barcodes are larger and take up more space on packaging
- More private than RFID (radio-frequency identification), which can be read at a distance by any equipment that can receive radio signals
- Can be read from a distance of a few meters by any digital camera
- Can contain a variety of useful information
- May lead to a new type of flexible interaction between machines and the human world
- Can be used in multiple contexts including education, business presentations, libraries, shops, gaming and product tracking in factories

So how soon could Bokodes be used to replace barcodes? That depends on cost: currently Bokodes require a lens, an LED and a power source and cost around $5 to produce, but reflective Bokodes, like the hologram on a credit card, would cost only around 5 cents. The team has passive prototypes already in development. Rewritable Bokodes are called Bocodes.
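The out-of-focus trick can be illustrated with a toy geometric model. The sketch below is an assumption-laden illustration, not the MIT team's actual optics or decoding code: it simply shows how a camera focused at infinity maps each incoming ray angle to a different position on its sensor, so the angular pattern emitted by the Bokode's tiny lens spreads out into a readable image. The focal length is an assumed example value.

```python
import math

# Toy model (not the MIT decoder): a camera focused at infinity images
# parallel rays arriving at angle theta onto its sensor at x ~ f * tan(theta).
# Because the Bokode's lenslet sends different tiles of its data pattern out
# in different directions, those tiles land at different sensor positions.
focal_length_mm = 50.0                    # assumed camera focal length
for angle_deg in (0, 2, 5, 10):           # example ray angles from the tag
    x_mm = focal_length_mm * math.tan(math.radians(angle_deg))
    print(f"ray angle {angle_deg:2d} deg -> {x_mm:5.2f} mm from the sensor center")
```

The same geometry suggests why the camera can infer its position relative to the tag: which part of the tiled pattern it sees depends on the angle from which it views the Bokode.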
<urn:uuid:197564de-f231-41b4-99b6-31de20d01ec0>
CC-MAIN-2016-26
http://www.bokodes.org/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398216.41/warc/CC-MAIN-20160624154958-00042-ip-10-164-35-72.ec2.internal.warc.gz
en
0.938811
579
3.53125
4
HIV / AIDS The scientific challenges of an HIV/AIDS vaccine The development of a safe and effective AIDS vaccine is scientifically challenging on several fronts. An ideal vaccine must elicit immune responses capable of blocking infection by sexual, intravenous, and mother-to-child transmission. It may also need to be capable of stimulating immune responses such as antibodies that are effective in neutralizing free virus particles, as well as cellular immune responses, which destroy virus-infected cells. The induction of mucosal immunity is also being explored. Meanwhile, the tremendous geographic diversity of HIV subtypes worldwide suggests that mixtures or “cocktails” of vaccines may be required for universal protective immunity. There is a lack of understanding of which anti-HIV immune responses are required to generate protective immunity against HIV and which components of the virus are necessary for an effective AIDS vaccine. Despite these challenges there is broad agreement within the scientific community that an effective AIDS vaccine is possible. This optimism is based on the knowledge, firstly, that a small but growing number of people have been repeatedly exposed to HIV but have remained uninfected; they have elicited anti-HIV immune responses that could explain their resistance to infection. Secondly, there are now several candidate vaccines that have protected monkeys from infection and/or disease caused by the simian immunodeficiency virus (SIV) or the chimeric SIV/HIV (SHIV), carrying the HIV envelope; while most of these experimental vaccines did not provide complete protective immunity they were effective in significantly reducing viral loads and progression to disease in vaccinated monkeys. Thirdly, some candidate vaccines already in clinical trials have induced strong anti-HIV immune responses in human volunteers. Finally, vaccines have been successfully developed against several other viruses – measles, mumps, rubella, polio, hepatitis B and rotavirus, for example – with much less knowledge of their fundamental biology and pathogenic mechanisms than HIV.
<urn:uuid:03364660-af97-44ca-832f-f8f9c71bc478>
CC-MAIN-2016-26
http://www.who.int/immunization/topics/hiv/en/index2.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399425.79/warc/CC-MAIN-20160624154959-00157-ip-10-164-35-72.ec2.internal.warc.gz
en
0.962401
393
3.859375
4
If you're looking to add variety and a new challenge to your exercise routine, introduce yourself to stair exercise. Stair exercise allows you to strengthen your lower body while improving your health, weight and fitness level. If you have a staircase, you don't even have to leave the house to get a good workout. Also, you don't have to spend a lot of time exercising, because even climbing stairs for short durations throughout the day has benefits.

Burn Those Calories
The exact number of calories you burn climbing stairs depends on your age, body composition and intensity level. You could burn about 150 calories in just 10 minutes of climbing stairs. Harvard Health Publications states that exercising for 30 minutes on a stair step machine, also known as a stair climber machine, will burn 180 calories for a 125-lb. person, 223 calories for a 155-lb. person and 266 calories for a 185-lb. person.

Even short bursts of stair climbing can improve your health. By taking the stairs for 13.5 minutes each day, participants in a study published in the April 2000 issue of Preventive Medicine improved their heart rates, breathing ability and HDL cholesterol levels. In this study, participants climbed 199 stairs, six times a day, to accumulate the 13.5 minutes of stair-climbing exercise. In other words, even climbing a few flights of stairs several times a day while at work, at home, or when running errands can significantly improve your health and help you lose weight.

Consult your physician prior to committing to any form of exercise program. Even at light to moderate intensity, stair climbing can place stress on the joints of the lower back and legs, advises Pierce. When climbing stairs, place your foot fully on each step and lean slightly forward to lower the stress placed on your back, hips and knees. Do not hunch your back or use the hand-railings for support, or you will decrease your exercise intensity and place stress on your back. If, at any time, you feel faint, dizzy, short of breath or nauseous while doing stair exercise, stop immediately.

Advantages of Stair Climbing
When you take the stairs, you give yourself many advantages. Even taking two flights of stairs every day will provide you with a free workout that could help you to lose 5.94 lbs. a year, improve your heart health, reduce your risk of osteoporosis, lower your risk of death, boost your confidence and relieve some stress or tension, according to the Columbia University Medical Center. Plus, it often takes longer to wait for an elevator than it does to take the stairs.

You can add variety to your stair exercise to keep it fun and interesting. If you use a stair-climbing machine, try varying your step speed, adding resistance or using a preprogrammed workout. When using a staircase, try walking up the stairs and running back down, stepping wide up the stairs as if you were skating up them, jumping up the stairs, walking up the stairs sideways or taking two steps at a time, advises Fitness Magazine. If you do not have access to a stair climber or a staircase, you can get your stair exercise by going up and down on one step or by using a sturdy crate or stool as a step.

- Dr. Patricia Pierce; Professor of Exercise and Rehabilitative Sciences at Slippery Rock University; Slippery Rock, Pennsylvania
- Harvard Health Publications: Harvard Medical School: Calories Burned in 30 Minutes for People of Three Different Weights
- "Preventive Medicine"; Training Effects of Accumulated Daily Stair-Climbing Exercise in Previously Sedentary Young Women; Boreham CA, Wallace WF, and Nevill A.; April 2000
- Fitness Magazine: Climb Away 150 Calories in 15 Minutes
- Columbia University Medical Center: Top 10 Reasons to Take the Stairs
- Jupiterimages/Goodshoot/Getty Images
This article reflects the views of the writer and does not necessarily reflect the views of Jillian Michaels or JillianMichaels.com.
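The Harvard figures quoted above are close to linear in body weight, so a rough rule of thumb can be fitted to them. The snippet below is illustrative arithmetic only: a linear fit to the three published data points with minutes scaled proportionally, not medical or training guidance.

```python
# Rough linear fit to the Harvard Health figures quoted above:
# 180 kcal at 125 lb, 223 kcal at 155 lb, 266 kcal at 185 lb
# for 30 minutes on a stair-climbing machine.
def stair_machine_kcal(weight_lb, minutes=30):
    per_30_min = 180 + (weight_lb - 125) * (266 - 180) / (185 - 125)
    return per_30_min * minutes / 30.0

for weight in (125, 140, 155, 185, 200):
    print(f"{weight} lb, 30 min -> ~{stair_machine_kcal(weight):.0f} kcal")
```

Actual calorie burn still depends on age, body composition and intensity, as the article notes, so treat these numbers as ballpark estimates.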
<urn:uuid:9aeb99b5-9ff9-4606-b52b-ebbb71aa666a>
CC-MAIN-2016-26
http://livewell.jillianmichaels.com/stair-exercise-4652.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399425.79/warc/CC-MAIN-20160624154959-00109-ip-10-164-35-72.ec2.internal.warc.gz
en
0.912568
857
2.53125
3
Teaching Good Decision Making Skills - A Guide To Teaching Your Child Safe Decision Making
This Amazing Course Will Blow Your Results Out Of Proportion! "Master These Ultimate Decision Making Techniques And Watch Your Children's Development Soar In A Fraction Of The Time!" Save Hundreds Of Hours Blindly Chasing Results By Tapping Into These Mind-Blowing Secrets To Your Children's Decision Making, Which Will Skyrocket Your Results Quickly!
Teaching a child how to make good decisions will help the child to view things in a better and broader light, and it will also help to make the child a more thinking individual as the experience becomes easier each time. Get all the info you need here.
Here's an overview of this ultimate Decision Making booster manual: With these boosting Decision Making strategies, you'll start to achieve results faster than ever! Also, big and difficult tasks suddenly become easier for your children because they know how to chunk them down. On top of that, you'll be getting special techniques and psychology tools for getting massive results fast!
Master The Skill Of Boosting Your Children's Decision Making And Achieve Confidence Like Never Before. Grab Your Package with MRR – Click Here!
<urn:uuid:6af2afff-a5b3-438c-ba87-f2009ce536ab>
CC-MAIN-2016-26
http://www.usfreeads.com/3124653-cls.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392527.68/warc/CC-MAIN-20160624154952-00106-ip-10-164-35-72.ec2.internal.warc.gz
en
0.858282
273
2.515625
3
The holes that formed in the ozone layer over Antarctica in 2011 and 2012 are a study in contrasts. The 2011 hole (top left) ranked among the ten largest recorded since the 1980s, while the 2012 hole (top right) was the second smallest. Why were they so different? Is it a sign that stratospheric ozone is recovering? These are the questions NASA scientists Anne Douglass, Natalya Kramarova, and Susan Strahan asked as they examined the holes using data from instruments on NASA’s Aura and NASA/NOAA’s Suomi NPP satellites. The images above represent the typical method of gauging the ozone hole. They show the extent (the geographic area covered) and the depth (the concentration of ozone from top to bottom in the atmosphere) as measured by Aura’s Ozone Monitoring Instrument. Blues and purples represent the lowest ozone levels. Each image shows the day of maximum extent—when the ozone hole was largest that year. But the view of area doesn’t tell the whole story, said Douglass. It says nothing about the chemistry or atmospheric dynamics that give the hole its shape. And if we don't know why the size and depth of the hole varies, it is impossible to know if policies meant to reduce ozone depletion (such as the Montreal Protocol) are having an impact. 2011 and 2012 offer prime examples. The Antarctic ozone hole forms in the southern spring when chlorine and other ozone depleting chemicals interact with sunlight to destroy ozone. It would be easy to assume that a larger ozone hole means more chemicals were present, but the real picture is more complicated. “2011 would have had less ozone even without ozone depleting chemicals,” said Strahan. Stratospheric ozone is naturally produced in the tropics and transported to the poles. In 2011, winds blew less ozone to Antarctica so there was less to destroy. Strahan also found less chlorine in the atmosphere over Antarctica in 2011 than in other years, but because there was less ozone, a large hole developed. In 2012, ozone depletion in the lower atmosphere was severe, said Kramarova. But in early October of 2012, winds blew in more ozone at higher levels, above the depleted area. The high-level ozone masked the destruction at lower altitudes, and so the hole looks small in the OMI image. All of this means that the size of the ozone hole is not the only indicator of how well policies to control ozone-depleting chemicals are working. “Ozone holes with smaller areas and a larger total amount of ozone are not necessarily evidence of recovery attributable to the expected chlorine decline,” said Strahan. “That assumption is like trying to understand what’s wrong with your car’s engine without lifting the hood.” In fact, the fluctuating size of the ozone hole has not been tied to chlorine concentrations since the 1990s, as shown in the two graphs above. The first graph depicts chlorine concentrations, and the second shows ozone hole size over time. In the 1980s, ozone hole area increased in step with chlorine concentrations, but that relationship broke down in the 1990s. The atmosphere became saturated with chlorine, and the additional chlorine did not have enough ozone to react with. Adding more chlorine in these conditions no long increases ozone depletion, and so the size of the ozone hole was no longer directly related to chlorine concentrations. Since the 1990s, the ozone hole area has been controlled entirely by weather. 
The chemicals that destroy ozone are so long-lived that Douglass, Strahan, and Kramarova don’t expect to see the impact of the Montreal Protocol until about 2025 when chlorine levels drop below saturation. Full recovery should occur sometime between 2058 and 2090, based on projections of levels of ozone-depleting gases and their break-down and transport. - NASA (2013, December 11) NASA reveals new results from inside the ozone hole. Accessed December 12, 2013. - NASA (2012, October 24) 2012 Antarctic ozone hole second smallest in 20 years. Accessed December 12, 2013. - NASA Earth Observatory (2011) World of Change: Antarctic Ozone Hole.
<urn:uuid:a738787e-b02e-47be-91ff-451cc86a5990>
CC-MAIN-2016-26
http://visibleearth.nasa.gov/view.php?id=82596
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397865.91/warc/CC-MAIN-20160624154957-00103-ip-10-164-35-72.ec2.internal.warc.gz
en
0.952956
862
4.3125
4
The Penny Game provides a good introduction to the elements and numbers in the federal budget. The activity works well for all ages, especially elementary, middle, and high school students. Here you can download and print the full exercise, including paper Income and Spending Boards. Players distribute pennies to represent their expectations for how the government raises and spends its money, and compare their expectations to the actual distributions from the previous year. This process provides each player with an understanding of how the country balances its priorities and may reveal some common misconceptions about that balance.

When using the Income Boards, each bean/penny represents almost $37 billion, which comprises 1% of the 2015 federal government taxes collected and money borrowed. Figures will not be exact due to rounding. There are 88 white beans/pennies and 12 of another color because the government collected 12% less than it spent in FY2015. The income figures are represented as a percentage of outlays. FY2015 income was 88% of outlays, which is another way of saying we had a 12% deficit.

When using the Spending Boards, each penny represents 1%, or approximately $37 billion, of federal spending. The FY2015 deficit was $439 billion. The 88 white beans or bare pennies represent the amount of federal taxes collected and spent in FY2015. The 12 red beans or covered pennies represent an additional amount the federal government borrowed and spent in FY2015.

- Make copies of Income and Spending Boards using cardstock or colored paper.
- Prepare a bag of 100 pennies or beans for each team. Each bag should contain 88 white beans and 12 red beans. If using pennies, leave 88 pennies bare and cover 12 pennies with red tape.

How to Play:
- Group players into teams of 4 or 5.
- Give each team a Penny Bag and an Income Board.
- Ask teams to distribute the 88 pennies onto the 4 tax squares of the Income Board according to where they think the taxes came from.
- When the Income Board is completed, give the correct answers as shown on the chart.
- Distribute the Spending Boards.
- Ask teams to distribute all 100 pennies (each representing $37 billion) among the 9 spending categories according to where they think the government spent the money in 2015.
- Read out the answers and ask each team to move the correct amounts onto the squares so they can visualize the comparisons.
- Make boards and answer sheets into overheads for use with large groups. Try this game at a meeting or have your students lead it in other classrooms.

- Health includes Medicare, Medicaid, safety/health inspections, Affordable Care Act insurance subsidies, and veterans health programs.
- Income Security includes unemployment compensation, housing assistance, food stamps, nutrition programs, general retirement and disability insurance (excluding Social Security), and other income security programs.
- Education includes all Department of Education outlays, plus job training, employment and social services. Keep in mind that most education spending comes from the state and local level, not from the federal government.
- International Affairs (originally called Foreign Aid) includes development and humanitarian assistance, international security assistance, conducting foreign affairs, foreign information and exchange programs, and international financial programs.
- Other includes homeland security; science, space, and technology; National Institutes of Health; energy; agriculture; commerce and housing credits; health-related research support; postal service; deposit insurance; transportation; community/regional development and disaster relief; veterans benefits and services (except health benefits); justice; and general government. - Due to rounding, actual figures may not add up perfectly.
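The scaling behind the game is simple proportional arithmetic, and it can be checked in a few lines. The sketch below only restates the figures quoted above (about $37 billion per penny and a $439 billion deficit); the implied outlay total is an approximation, not an official number.

```python
# Checking the Penny Game's scaling, using only the figures quoted above.
PENNIES = 100
PER_PENNY_BILLION = 37            # each penny ~1% of FY2015 outlays
DEFICIT_BILLION = 439             # FY2015 deficit, from the text

outlays_billion = PENNIES * PER_PENNY_BILLION              # ~3,700
red_pennies = round(DEFICIT_BILLION / PER_PENNY_BILLION)   # ~12 borrowed pennies
white_pennies = PENNIES - red_pennies                      # ~88 tax-funded pennies

print(f"implied outlays : ~${outlays_billion} billion")
print(f"red pennies     : {red_pennies} (borrowed share)")
print(f"white pennies   : {white_pennies} (income ~{white_pennies}% of outlays)")
```

The result reproduces the 88/12 split described in the rules, which is why income is said to be 88 percent of outlays.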
<urn:uuid:5e822c13-da59-42aa-8baa-b358db9ce964>
CC-MAIN-2016-26
http://www.concordcoalition.org/act/tools/penny-game
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394987.40/warc/CC-MAIN-20160624154954-00187-ip-10-164-35-72.ec2.internal.warc.gz
en
0.924944
740
3.9375
4
This claim is based on the Hadley Center report which showed a rise in average temperature of 0.02°C per decade between 1998 and 2008. It has since been widely publicized by climate-change sceptics and wrongly interpreted as a sign that global warming has stopped. But this set of statistics did not include the Arctic, where temperatures have risen significantly in recent years. The 21st century has seen the largest number of temperature records broken: 2014 is the hottest year to date since 1850, soon to be overtaken by 2015. Temperature fluctuations from one year to the next may be linked to changes in solar activity, which varies following a cycle of around eleven years. However, the amount of solar energy released varies by no more than 0.1%. As Jean Jouzel, Vice-Chair of the IPCC Working Group I, explains, “If the sun governed global warming, the entire atmospheric column would be affected. Yet we are experiencing a warming of the lower layers and a cooling of the stratosphere. This clearly indicates the role played by the worsening greenhouse effect.” Climate models are predictions and as such cannot be perfect. Nevertheless, over the years, scientists have fine-tuned them to gain a fairly accurate picture. The relevance of these models has also been tested on past climate patterns. For if they are borne out by past events, then they are the right way to predict future climate patterns. As a result, the models used are largely reliable, with a slight discrepancy between predictions and observations . The uncertainty in these models is linked to unpredictable events such as volcanic eruptions or solar activity. But in spite of these intermittent events, long-term climate developments closely match the predictions made using climate models. In reality, the IPCC co-authors are not paid at all. The organization has only 30 permanent staff, compared with 831 voluntary authors (selected from among 3000 candidates). These volunteers must devote the equivalent of four to five months of work to the report, in addition to their own research. It is therefore work that relies on the goodwill of the scientific community. Moreover, the authors come from all over the world and are often replaced (69% turnover of authors from the 4th to the 5th report) to promote the exchange of opinions and new ideas. There is no longer a debate on the existence of global warming, at least not in the scientific community. There is a broad consensus among professionals: 90% consider that the rise in global temperatures is an alarming, proven fact, while 82% agree that global warming is strongly linked to human activity. The climate is a complex model and varies due to many different parameters. Solar activity, eruptions and sea currents have major impacts in the short term, and even in the medium and long term. However, today human activity is the prevailing force in global warming, which was never previously the case. It is true that a milder winter has short-term benefits, such as lower energy consumption. However, in the long term, there are many negative aspects to this phenomenon. A succession of mild winters would lastingly affect the quality of cropland by lowering the water tables that supply it, for example. Warmer winters may also disrupt whole ecosystems or foster the spread of diseases (as the cold kills more insects, which are disease vectors).
<urn:uuid:b1bdd9bd-026c-4669-ae44-abf1275aca7d>
CC-MAIN-2016-26
http://www.cop21.gouv.fr/en/getting-rid-of-received-ideas/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391766.5/warc/CC-MAIN-20160624154951-00103-ip-10-164-35-72.ec2.internal.warc.gz
en
0.963069
679
3.703125
4
The largest earthquake (magnitude 9.5) of the 20th century occurred on May 22, 1960 off the coast of south central Chile. It generated a Pacific-wide tsunami, which was destructive locally in Chile and throughout the Pacific Ocean. The tsunami killed an estimated 2,300 people in Chile. There was tremendous loss of life and property in the Hawaiian Islands, in Japan and elsewhere in the Pacific. Destructive waves in Hilo, Hawaii, destroyed the waterfront and killed 61 people. Total damage was estimated at more than US $500 million (1960 dollars).
<urn:uuid:bc178d31-25a9-4440-928c-23cfc4c19ca0>
CC-MAIN-2016-26
http://www.bom.gov.au/tsunami/history/1960.shtml
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395546.12/warc/CC-MAIN-20160624154955-00130-ip-10-164-35-72.ec2.internal.warc.gz
en
0.961827
116
3.421875
3
An ophthalmology technician is a person who supports an ophthalmologist, a medical specialist who detects and treats eye disease, in offering eye care for patients. Also known as an ophthalmic technologist, an ophthalmology technician is an allied health professional who works in hospitals, clinics, medical centers, and private practices. Responsibilities typically include carrying out diagnostic exams, such as measuring a patient's vision, maintaining equipment, and clarifying concepts to patients. In addition, a technician may assist during eye surgery and needs to understand ophthalmic pharmacology and corrective lenses.

Working for an ophthalmologist, a technician gathers information requested by the eye specialist. A technician performs various daily duties, such as collecting patients' medical histories and gathering eye measurements. During an eye evaluation, an ophthalmology technician looks at eye muscle function and measures eye pressure, muscle movement, and pupil reactions. Also during an eye exam, he will record a patient's scope of vision and color vision. During surgery, a technician provides assistance by getting the surgical room ready and helping to monitor the patient.

Ophthalmic technicians are skilled in using ophthalmic instruments, such as phoropters, tonometers, sonographers, and ultrasounds. As a technician becomes more advanced, he gains skills in ocular motility, administering prescriptions, and ophthalmic imaging. With more training and skill sets, a technician is able to explain various surgical procedures and medicines.

A person trained as an ophthalmic technician may advance in his career. Eventually, he may become an office manager or ophthalmic medical technologist. Down the road, an ophthalmic technician also may opt to become a certified medical technologist and become qualified to serve as a surgical assistant.

To become an ophthalmology technician, a person needs to have a high school diploma or equivalent. A person will generally need additional education, such as completing a one- or two-year program accredited by the Commission on Accreditation of Allied Health Education Programs. Classes typically revolve around anatomy, physiology, ophthalmic optics, and microbiology. Other courses cover ophthalmic pharmacology and diseases of the eye. Another way to become an ophthalmology technician is to gain employment as an ophthalmic assistant and advance to the technician position.

The salary of an ophthalmology technician may vary, depending on his skills and professional experience. Typically, a person can expect to make anywhere between $30,000 and $70,000 in the field. Generally, an individual who works at surgical centers and assists with surgical procedures will have a higher salary than a person who works for a private practice.
<urn:uuid:35ad8705-81ee-440a-b812-e20b3109d989>
CC-MAIN-2016-26
http://www.wisegeek.com/what-does-an-ophthalmology-technician-do.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396106.25/warc/CC-MAIN-20160624154956-00100-ip-10-164-35-72.ec2.internal.warc.gz
en
0.959309
603
3.34375
3
St. Rose Elementary students experiment with erosion
From staff and wire reports - Jan 24, 2013
Students in Kristy Mascarella’s 5th grade class at St. Rose Elementary are learning about erosion. They learned about the causes of erosion, factors that influence erosion and ways we can prevent or decrease the amount of erosion that takes place in different areas.
<urn:uuid:77472c5f-93a8-43f8-8be9-7bd524a895af>
CC-MAIN-2016-26
http://www.heraldguide.com/printer_friendly.php?id=11997
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393442.26/warc/CC-MAIN-20160624154953-00128-ip-10-164-35-72.ec2.internal.warc.gz
en
0.871843
124
2.734375
3
KIDS: Hey! We're here for the party. GIRL: There's no party and I don't even know you! SARAH LARSEN, REPORTER: You're careful about who you let into your house, right? But are you as careful when it comes to your computer? The internet gives you access to heaps of stuff, but there's plenty of nasties out there too. It's called Malware - that's short for malicious, or nasty, software and this is it in action. Malware is any software designed to do something you don't want. Computer experts say it's a huge problem around the world. Some reckon of all the software written today, more than half is malware. And most people don't even realise it's there. So let's have a look at some different types of malware that you could be letting in. First there's the virus. These nasty little critters piggyback on things like emails and downloads to get into your computer. They act like a real virus, copying themselves and then getting up to all sorts of things. They can make your computer sick, destroy your files, or just remind you that they're there in annoying ways. And here's a friend of the virus; the worm. They can sneak into your computer by themselves and cause all sorts of havoc. Next is the trojan horse. It's a program that looks like something you want, like a game, a download or another piece of software so you let it in and it goes berserk. There's a very sneaky one called spyware. It can watch and record everything you do what's on your screen. Even your key strokes. Your personal details, your credit card number, the names and email addresses of your friends all can be recorded by spyware. REPORTER: So you can see why malware can be more than a nuisance. So why do people do it? Sometimes it's done as a prank, other times it's more serious. Sometimes businesses use spyware to collect information about what you are like, to help them sell you things. Criminals can use it to steal money or information. Think of all the ways we use the internet from banking to your school and medical records and think of the damage that malware could do. But you can do things to protect yourself. Like, if you get an email from someone you don't know and there's an attachment, don't open it. It could be malware. And you might have seen messages like this. Don't get too excited, you probably haven't won anything! And if you click that link you could be letting in a nasty piece of software. Malware can lurk in random websites and downloading free games, videos, music and software can be risky. You can also get special scanning software to stop nasties sneaking in. Malware will probably always be there but if you're better at recognising it you can keep it from getting in!
<urn:uuid:321ac037-b923-4d2d-ac89-61302eac9dcc>
CC-MAIN-2016-26
http://www.abc.net.au/btn/story/s2316644.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399428.8/warc/CC-MAIN-20160624154959-00129-ip-10-164-35-72.ec2.internal.warc.gz
en
0.966299
615
2.921875
3
Since 2006, farmers, scientists and apiarists have been mystified by the systematic disappearance of bees throughout the Northern Hemisphere. The phenomenon, known as “colony collapse disorder,” has wiped out up to 90% of the bee population during some seasons. The environmental consequences could be devastating - bees are responsible for pollinating wildflowers, which could affect the bird and butterfly populations, as well as the nation’s fruit and vegetable crops. Here in California, commercial growers rely on honeybees to pollinate avocado, almond, cherry, and plum trees. Colony collapse disorder has been blamed on everything from climate change, cell phones and malnutrition to genetically modified crops and parasites that turn bees into “zombies.” But two studies just published in the journal Science now point to a certain class of pesticides as the cause. In a study conducted at Scotland’s University of Stirling, bumblebees exposed to neonicotinoids, which are used to control aphids and beetles, produced 85% fewer queens and gained up to 12% less weight than control colonies. Another study from the French National Institute for Agricultural Research, which focused on honeybees, found that the pesticide caused the bees to become “intoxicated” and unable to find their way back to their hives. Is this the final word on colony collapse? Are there alternative ways to protect our food supply and our bee population? What can be done to save the world’s bees – or is it already too late? Jeff Pettis, research leader, U.S. Department of Agriculture’s Bee Research Laboratory
<urn:uuid:c1a6d57d-9e99-4a72-a061-5897eec6c655>
CC-MAIN-2016-26
http://www.scpr.org/programs/patt-morrison/2012/03/30/25816/new-studies-point-to-pesticides-for-bee-disappeara/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399522.99/warc/CC-MAIN-20160624154959-00066-ip-10-164-35-72.ec2.internal.warc.gz
en
0.94668
333
3.515625
4
Showing 1-24 of 119 items found in History 8th Judicial Circuit Marker One of the last remaining markers erected in 1922 marks the 8th Judicial Circuit on which Abraham Lincoln practiced law. A.L. Van Den Bergen Statue " Abraham Lincoln" This bronze statue was originally dedicated in 1931 to commemorate Lincoln's "Fool the People" speech. Abe Lincoln Mural Located in downtown Mount Pulaski, this mural depicts a young Abraham Lincoln in front of the historic Mount Pulaski House. Abe Lincoln's Talking House Driving Tour Tune your vehicle's radio to 1650 AM or 1620 AM and listen to the history behind 14 homes in Pittsfield that have a connection to Abe Lincoln. Front yard signs also explain each home's historical significance. Abraham Lincoln Long Nine Museum Has electronic audio narrated dioramas that depict Abe the railsplitter, the self-taught scholar, the story teller, the lawyer and the politician. Abraham Lincoln National Cemetery This cemetery was named after the 16th president of the United States, and was designed to serve approximately one million Chicago metropolitan area veterans. Abraham Lincoln Presidential Library and Museum This museum is one of the most-visited presidential museums in the nation where visitors can experience the entire Lincoln story under one roof, from Abe's humble beginnings in an Indiana log cabin to his days as president in the White House. Be dazzled by two special effects theaters featuring historical ghosts and a Civil War battlefield, life-like vignettes that depict important moments in the president’s life, and artifacts that range from Lincoln’s stovepipe hat to an original copy of the Gettysburg Address. Adlai E. Stevenson Historic Home Adlai Stevenson II was an important and influential figure in the political history of the United States. Stevenson was Governor of Illinois from 1949 to 1953 and ran twice for President as the Democratic National Candidate in 1952 and 1956. He also served as Ambassador to the United Nations from 1961 - 1965. The grounds are open daily for self-guided tours. The peaceful setting allows visitors to experience the historic landscape similar to when the family lived in the house. The house has been designated a National Historic Landmark. Group tours can be arranged through the Forest Preserves - 847-968-3422. Constructed in 1857 as the southern division of the Illinois State Supreme Court, Abraham Lincoln successfully argued a famous tax case in 1859. In 1888, Clara Barton used the building as a hospital. Tours are available. Please call in advance. Apple River Fort State Historic Site Apple River Fort State Historic Site, located in Elizabeth, Illinois, is the site of one of the battles fought during the Black Hawk War. Black Hawk and his 200 warriors attacked the hastily erected fort on June 24, 1832. His story and that of the early settlers are told. Atlanta Heritage Waysides Located at the Atlanta Museum, these three exhibits and 20 other prints depict a variety of Lincoln and Logan County events. It is located at the site of an early political rally during Abraham Lincoln's campaign for President. Exhibits focused on Abraham Lincoln, Route 66, and other aspects of Atlanta’s history are featured. The Museum’s Local History Resource Center provides extensive genealogy materials accessible to the public. Housed in a beautifully restored 1867 building, the Atlanta Museum presents both permanent and new, rotating exhibits. Open Monday through Saturday 9 a.m. to 4:30 p.m. Closed Sundays. 
Atlanta's Abraham Lincoln Interpretation Site The site of an early political rally during Abraham Lincoln's campaign for President, now showcasing an interpretive sign explaining the historic significance. Batavia Depot Museum Experience railroad and war history alongside Batavia-related exhibits. The original bed and dresser from Mary Todd Lincoln's room at Bellview Sanitarium are displayed here. Big River State Forest Encompassing more than 3,000 acres along the Mississippi River, Big River State Forest is a remnant of woodland that once bordered the vast prairies. The 1-½ mile Lincoln Hiking Trail commemorates Abraham Lincoln's march through the area in 1832. Blackhawk War Monument This monument is located on the site of Kellogg's Grove, an early settlement established in 1827 on a mail route between Peoria and Galena, and now on the National Register of Historic Places. It honors those killed in the Blackhawk War, including in the final Illinois Battle which occurred at this grove in June, 1832. Abraham Lincoln, a member of the Illinois militia, helped bury five of the slain men. The remaining soldiers were originally buried throughout the area at the spots at which they fell. Fifty years after the war, local farmers collected the remains and buried them in one enclosure on top of this hill overlooking the Yellow Creek Valley. The 34-foot high monument was dedicated in 1886. Bobby's Bike Hike Chicago Tours Tour Chicago on a cool cruiser-style bicycle and follow a guide who makes brief stops at the most popular sights, providing light-hearted commentary that will keep you entertained. Some fun rides include the Lakefront Neighborhoods Tour, Bikes, Bites and Brews Tour, and the Southside Gangster Tour. Bryant Cottage was built in 1856 by Francis E. Bryant (1818-1889), a friend and political ally of Senator Stephen A. Douglas. According to Bryant family tradition, on the evening of July 29, 1858, Douglas and Abraham Lincoln conferred in the parlor of this house to plan the famous Lincoln-Douglas Debates. The picturesque one-story, four-room wood frame cottage has been “restored” and is interpreted as an example of a middle-class life in mid-nineteenth-century Illinois. The furniture on display is of the Renaissance Revival style, appropriate for a small-town family of the mid-nineteenth century. The cottage is accessible to persons with disabilities. The site hosts portions of a variety of locally sponsored events throughout the year. C.H. Moore Homestead Listed on the National Register of Historic Places, this restored mansion and grounds whisk visitors back to the Victorian era. Once home to Clinton attorney Clifton H. Moore, visitors will enjoy tours and stories of the friend and law partner of Abraham Lincoln who one resided there. Home of the DeWitt County Museum. Chicago Trolley and Double Decker Co. We operate Chicago’s premier Hop On Hop Off ® city sightseeing tours in the classic red & green Trolleys and fun-filled Double Decker buses. We also offer private group transportation for special events such as weddings, parties, and corporate outings. For 19 years the Hop On Hop Off® sightseeing tour has been the gold standard for entertaining and informative tours. Covering 13 miles and 14 stops, the Signature Tour is an eye-popping adventure through the heart of Chicago, giving you the option of Hopping On and Off at your choice of stops to visit the hottest retail, cultural, and family attractions. Summer tours include neighborhood tours and night tours. 
Christian County Historical Society Museum See an 1820s log house, the 1839 Christian County courthouse where Lincoln argued cases, an 1854 farmhouse and an 1856 one-room school. Also view military weapons from five wars, a collection of 1800s antiques and much more. Clark County Museum The Clark County Historical Society is dedicated to the preservation and education of all things pertaining to the people and places of Clark County, Illinois. Learn about the Lincoln-Douglas debates and unique area country architecture here. Cruisin' With Lincoln on 66 The main source for information about McLean County’s historic and modern attractions in the Bloomington-Normal area is the "Cruisin’ with Lincoln on 66" Visitors Center. Located in Downtown Bloomington, the Visitors Center provides information on all of the wonderful attractions, events, dining and lodging available in McLean County. Their exhibits highlight two types of heritage tourism that is integral to Central Illinois: Historic Route 66 and Abraham Lincoln. The gift shop is filled with local products, memorabilia and more! Cruisin' with Lincoln on 66 Visitors Center Located in the heart of Downtown Bloomington on Historic Route 66, the Cruisin with Lincoln on 66 Visitor Center exhibits cover stories about dining, lodging and travel, which were experienced by both Abraham Lincoln and Route 66 travelers. These are supplemented by local items, books, cards, maps and more!
<urn:uuid:6bec616b-d563-4c6f-952d-c666ba9f20db>
CC-MAIN-2016-26
http://www.enjoyillinois.com/thingstodo/2?sortascending=True&sortby=name&tagids=56
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393332.57/warc/CC-MAIN-20160624154953-00157-ip-10-164-35-72.ec2.internal.warc.gz
en
0.922007
1,758
3.015625
3
Cubism from Scratch
Use PROCESS, CREATIVE EXPERIMENTATION, and DISCOVERY to foster Independent Creative Work Habits. Students practice the construction of knowledge. Students do not imitate. They INNOVATE. They do not work from examples. They CREATE their own ideas.
by Marvin Bartel, Ed.D. © 2001

On the right is a practice drawing using a three-dimensional paper duck as a study. Note that it has been drawn from different distances and from different views, all within the same space.

- Practice observation drawing
- Learn to compose shapes, lines, and colors
- Learn about principles of composition, including time, motion, and emphasis
- Encourage creative divergent thinking and experimental work habits
- Change habits of work and thinking
- Foster a collaborative art studio atmosphere
- Avoid becoming dependent on imitation and copywork
- Avoid dependence on teacher demonstrations
- Build self-confidence, natural curiosity, and focus
- Encourage playfulness, connectedness, and appreciation of nature and human history
- Learn about an important art style (a way of seeing), art history, art criticism, and aesthetics

Age and Grade
This is a good lesson for adults and children who have mastered some abstract thinking ability. This lesson is best above second grade, but advanced kindergarten children enjoy it.

Teaching the Lesson
Do NOT show artwork or say the word cubism until near the end of the lesson. Do NOT demonstrate. Students learn by doing. Have students practice from the motivations behind cubism without first seeing cubist images. Just as real artists are inventors, guide students to make discoveries: we help students discover cubism themselves. Celebrate with them. Help your students develop the habits of thinking used by highly creative people rather than teaching them to emulate artists by copying the mere look of their work. In order to do this, the teacher has studied cubism and has a working understanding of the theories and aesthetic motivations of historic cubism.

Traditionally, art historians have supposed that cubism represented a way of seeing our world from multiple viewpoints simultaneously, but now we have strong evidence that Braque and Picasso were influenced by the invention of motion pictures. In 2007, there was a ground-breaking exhibition, Picasso, Braque and Early Film in Cubism, at the Pace Wildenstein in Brooklyn, New York, April 20 – June 23, 2007. Arne Glimcher and Bernice Rose invented and curated this very innovative exhibition, which illustrates the influences of early motion picture film on the minds of Picasso and Braque. Numerous art historians and painters have studied cubism for nearly 100 years and have never seen what has long seemed very obvious to Arne Glimcher. See sources below: #1 Micchelli, #2 Rose.

The teacher guides the students, who learn to set up a large still life in the middle of the room or several small setups in the middle of their work tables. They bring in sporting stuff, stuffed toys, musical instruments, some cloth, a few dry weeds, and so on. Depending on the season, some teachers bring large sunflowers, grapes, gourds, squash, onions, eggplant, apples, and so forth from the garden. Cut a few of these in half. Taste and smell are excellent multi-sensory motivation. Another variation uses one or two student models that move according to teacher prompts to simulate a dance motion or an athletic action. In the variation below, two chickens move about while the class draws them. The pencil drawing on the right was made in the author's adult drawing class.
It is a practice observation drawing of two chickens in motion drawn with the instructions to keep drawing in the same space while the chickens are moving. The instructions included the use of a blinder and not looking at the paper while the pencil is in motion. Drawing © Donn Odle 2008 Distribute the materials before discussing the process and giving drawing directions. This avoids disrupting them when they are ready to start working. Use any drawing media that students are already familiar with. Otherwise use a warm up to familiarize students with the material. Select paper that is large enough for the drawing tools and art media being used. For charcoal, pastels, oil pastels and paints you could use 12 x 18 or larger. If they work with drawing pencils, ink, ball point, or with small brushes, use a smaller size so it does not take too long. This might depend on the the age and prior experience of the students. the Creative Process If working at tables (observing a still-life or animal), encourage students to stand up while drawing so they use arm motions instead finger motions. Ask them to begin by selecting an interesting area in the setup and drawing very large so things go off the edges. Cardboard viewfinders (or empty 35 mm slide frames) are helpful in finding and sizing things. If it is a still-life, students work for few minutes until the teacher has them move to a completely different position and continue drawing the same objects on the same paper overlapping with the drawing they started (or move the still-life). Ask thinking questions and experiment questions. "What happens when you change the size or scale when you change position? For those that have been drawing large: "What happens when you add small detail?" For those that draw small: "What happens when you make the next part very much larger? How does it seem to move in and out in from your paper?" As much as posible, try to use open questions and "what if" questions rather than commands or suggestions. Repeat drawing and moving to a new position until the paper begins to fill with overlapping and transparent drawing After a few moves, invite students to slowly walk around to see how other students have worked at the problem. Affirm a diversity of approaches. Ask them a series of open questions to make them aware of motion and time. "How do the drawings suggest motion? Does anything in the drawings look farther away or closer to you? How does this happen? What things are repeated with variation? Can you see things about the drawings that move you into the drawing or away from the drawing? Do you see the effects of size change, of repetition, of gradation, and so on?" As the paper begins to fill with overlapping shapes, ask them what happens when you shade in and color the drawing to create an overall pattern. What is the effect of gradations? What if they include some recognizable places here and there? Can the evaluation is to be more on overall design and movement than on realism? Ask them how they can make adjustments in the compositions to achieve unity and harmony so that no one area becomes too dominant or different than the whole. When most of them appear to be nearly complete, or when the first to finish feel they are done, have them all stop and form groups of three. - Prohibit negative responses. Encourage the use of questions that analyze and speculate. Using six eyes instead of two, ask them to look at each other's work and tell them what parts of their pictures they notice first and why. 
Ask which parts show the most emphasis and which parts show the least emphasis. Encourage every student to participate, to form questions, to describe what is noticed, to analyze, and to speculate.
- Ask them to discuss time and motion in the works. They are not to use judgmental terms like good or bad, just say what they see that shows the most and try to give some reasons and explanations.
If a student asks the teacher to tell what to do next or if it is good enough, the teacher asks them a question that gets them to remember the process or to look at parts that they may have missed. The teacher refrains from telling them what to do. The teacher does not make a suggestion. The teacher gives them open choices rather than commands or directions. The product is not supposed to have a certain look, but the students are supposed to learn to make their own artistic choices based on criteria the teacher gives. Resist the temptation to make specific suggestions. Student thinking is cultivated better when the teacher honors the students' ideas and does not do the thinking for the students.
When they are done, have them post the work for all to see. Discuss the work by again asking what they notice first. Do not allow negative comments. Follow the initial response by asking for explanations of why they notice certain things. This is not judging; it is describing and analyzing. If students miss things, the teacher asks about them. "Why do I see motion in this drawing?" Sometimes it is also interesting to speculate about the meaning of their pictures (interpretation). Making up titles helps with this.
Show one or more examples of Georges Braque and/or Picasso, who invented cubism (use any general reference art history book, library books on artists, slides, reproductions, posters, and/or the internet). It is quite easy to print color pictures from the web onto transparency blanks made for ink jet printers (footnote web sources). These can be shown in a class with an overhead projector if your class does not have a computer projector to show them directly from the web site. (Kennedy; Rose)
Ask them to speculate about the process the artist(s) must have used to come up with their compositions. Ask them how they think the artist was looking at the work. Ask them to speculate about the reasons the artist decided not to simply show a simple picture of the subject matter.
- Ask them to remember the way motion pictures move from clip to clip to tell a story. (Kennedy; Rose)
Explain the word Cubism and give a bit of background on how innovative it was in the art world at the time it was invented. David Hockney is a contemporary British artist who has played with these concepts by using photography to make many pictures of the same thing and putting them all together in a composition that gives what he feels is a much more realistic impression of how we perceive the world. He likens the typical camera's photograph to the view of a one-eyed, single-impression Cyclops. He claims that as humans we really see the world by mentally composing reality from many visual impressions of a subject or scene. Which is realism?
- Ask the students to write a short paragraph about what kind of art they think Picasso and Braque would invent if they were living today with cell phones, high speed Internet, and space travel.
Art in Everyday Life
How does all this connect to our lives outside the art room? How is art and life connected? How are the events of a day connected to each other and overlapping with each other?
Students are asked to make a list of everyday experiences that could be combined in one composition. Students are asked to make sketchbook entries that cover a portion of a typical day, all in one overlapping and transparent composition. For example, each sketch combines several aspects of the morning trip to school or the afternoon trip home. Aesthetically, they are encouraged to reflect on the differences in their feelings in the morning compared to their feelings in the afternoon. How is this difference in feeling represented in their cubist time-sequence compositions? Could it be done with color relationships, with size, with line type, or another device?

Review is a very efficient use of class time. After reviewing something several times, it is much more apt to be remembered and used beneficially in another project. Sometimes there is a minute or two after cleanup time before the bell rings. Even if the bell rings before a question is answered, it is still good to raise the question. Ask a review question. Ask an art vocabulary question. What does "emphasis" mean in a composition? What does "unity" mean? What are the differences between "unity" and "harmony"? How are artists similar to inventors? Is cubism more or less realistic than realism?
- How can the passage of time be shown in a drawing?
- Which is more beautiful, movement or symmetry and stability?
- What are the ways to show motion in a drawing?
- Which of your previous projects would be more fun if they included what we learned about motion today?

Review is even more effective if it is done again at the beginning of the next session, a day or more later. When a teacher expects students to remember things from session to session, students' thinking habits are gradually trained to remember. They learn to expect that what is being learned has a purpose and is to be incorporated into the next project. Ask questions that connect previous learning with today's questions and artwork. To encourage creativity, pose questions that will be coming up in art class in the near future. I try to respond with enthusiasm to unexpected results--even when they are unexpected by me. "Wow! How did you do that?"

<><> END OF LESSON <><>

Not enough time to do this lesson? Do not take shortcuts. Think of it as a unit that continues for as many sessions as are needed to do it well. Start each session with warm-up and review. Many more things are learned when we take the time to do something well. Teaching many short lessons leaves the impression that art is quick and easy. Art is not a bunch of products. It is a way of thinking and working that materializes and expresses ideas. Artists know that things worth doing take time and may require lots of experimentation.

SOURCES AND REFERENCES USED ABOVE
Kennedy, R. (2007) "When Picasso and Braque Went to the Movies." New York Times, April 15, 2007.
Rose, C. (2007) "A discussion about Picasso, Braque and Early Film in Cubism with Bernice Rose and Arne Glimcher in Art & Design," Friday, June 8, 2007. Charlie Rose, © 2010. http://www.charlierose.com/view/interview/8540 [retrieved 12/14/2010]

All rights reserved © 2001, 2nd edition in 2008, by Marvin Bartel, Emeritus Professor of Art, Goshen College. All text and photo rights reserved. You are invited to link this page to your page. For permission to reproduce or place this page on your site or to make printed copies, contact the author.
Marvin Bartel, Ed.D., Emeritus Professor of Art and Adjunct in Art Education, Goshen College, 1700 South Main St., Goshen, IN 46526, USA. Updated: December 2008.
<urn:uuid:2d3894b3-279e-4334-9de4-e0e68f932e7c>
CC-MAIN-2016-26
http://people.goshen.edu/~marvinpb/lessons/cubism.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394987.40/warc/CC-MAIN-20160624154954-00127-ip-10-164-35-72.ec2.internal.warc.gz
en
0.940914
3,177
3.84375
4
The Basics of Solar Energy

Solar energy is the primary source of energy on Earth and drives almost all natural processes (see how does solar energy work). Our Earth receives roughly 1000 watts (that is, 1000 joules per second) of solar energy per square meter of land area, and amazingly enough all of this is generated by thermonuclear fusion within the sun. Part of the massive amount of energy released by the sun reaches the Earth in the form of solar radiation, although much of it is reflected or absorbed before it reaches the Earth's surface. "Solar energy" usually refers to the different ways the sun's energy can be used to generate solar power.

How Solar Energy Becomes Solar Power

Solar power is generated by a) a surface that collects solar energy, and b) a method of converting the captured energy into electricity. There are two approaches:
- Direct or photovoltaic conversion: Sunlight can be converted directly into electricity by using solar panels. Solar panels are composed of photovoltaic cells, or solar cells, arranged in a grid-like pattern on their surface. The individual solar cells are made of special semiconducting materials like silicon. When sunlight strikes this surface, the solar cell converts the solar radiation into useful electrical energy (a direct current) which can be used in the home (see also: how do solar panels work).
- Indirect or solar thermal conversion: In this method, the sun's energy is used to create heat to boil water. Mirrors or reflectors are used to concentrate sunlight (like a magnifying glass) onto containers full of liquid (often water, but other liquids that retain heat better are also used). The liquid is heated to a high enough temperature to produce steam, and this steam drives a steam turbine that produces electricity (see also: solar thermal energy facts).

Solar power is also used directly to run all sorts of electronic equipment, from handheld calculators to emergency road signs, call boxes, overhead lights in parking lots, and even some experimental vehicles (see portable solar power systems). Solar power is also used by satellites, where arrays of solar cells provide reliable power for the satellite's electrical systems. More: How does solar power work? (Infographic)
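To make the numbers above concrete, here is a small back-of-the-envelope sketch (an editorial addition, not part of the original article). It uses the roughly 1000 watts per square meter figure quoted above; the panel area, cell efficiency, and hours of full sun are illustrative assumptions rather than data from the article.

```python
# Rough estimate of photovoltaic output from the ~1000 W/m^2 irradiance figure.
# Panel area, efficiency, and full-sun hours below are illustrative assumptions.

IRRADIANCE_W_PER_M2 = 1000.0  # approximate solar power reaching a square meter of ground

def pv_power_watts(panel_area_m2: float, efficiency: float) -> float:
    """Instantaneous electrical output of a panel under full sun."""
    return IRRADIANCE_W_PER_M2 * panel_area_m2 * efficiency

def daily_energy_kwh(panel_area_m2: float, efficiency: float, full_sun_hours: float) -> float:
    """Energy produced over a day, given equivalent hours of full sun."""
    return pv_power_watts(panel_area_m2, efficiency) * full_sun_hours / 1000.0

if __name__ == "__main__":
    # Hypothetical 1.6 m^2 residential panel at 18% efficiency with 5 full-sun hours:
    print(f"Power under full sun: {pv_power_watts(1.6, 0.18):.0f} W")      # about 288 W
    print(f"Energy per day: {daily_energy_kwh(1.6, 0.18, 5.0):.2f} kWh")   # about 1.44 kWh
```

The point of the sketch is simply that output scales linearly with collector area and conversion efficiency, which is why both photovoltaic and solar thermal systems focus on capturing more light or converting it more efficiently.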
<urn:uuid:f324c99d-be24-428d-92de-cd7c92e3f0a9>
CC-MAIN-2016-26
http://solarenergyfactsblog.com/solar-energy-basics/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398873.39/warc/CC-MAIN-20160624154958-00024-ip-10-164-35-72.ec2.internal.warc.gz
en
0.907639
468
4.1875
4
Infectious mononucleosis or “mono” is an illness that afflicts teenagers and young adults, mainly ages 14 to 30. It has been estimated that approximately 50 percent of students have had mono by the time they enroll in college. Mono is caused by the Epstein-Barr virus, a member of the Herpes family of viruses, but there are other viruses that may produce a mono-like illness. The disease usually occurs sporadically and outbreaks are rare. Many times the symptoms are so mild it isn't even recognized. In underdeveloped countries, people are exposed to the virus in early childhood, when they aren't likely to develop noticeable symptoms. In developed countries such as the United States, the age of first exposure may be delayed until older childhood and young adulthood, when symptoms are more likely to occur. The most common symptoms include excessive fatigue, headache, loss of appetite, sore throat, swelling of the tonsils, enlarged lymph nodes (swollen glands) in the neck, underarms, and groin. A low-grade fever occurs at first, and then rises to above 100 degrees after the third or fourth day. Sometimes, the liver and spleen are affected and enlarged. The disease lasts one to several weeks. A small proportion of affected people can take months to return to their normal energy level. The Epstein-Barr virus is found in moist exhaled air and secretions from the nose and throat. It isn't as contagious as many other viruses but may be transmitted by direct contact, which explains the origin of mono's nickname as the “kissing disease.” How Soon Do the Symptoms Appear? Symptoms appear from four to six weeks after exposure, but may be the same as many other illnesses, such as the common colds or strep throat. For this reason, it is particularly difficult to diagnose mono in the early stages of illness. The diagnosis is helped by two blood tests: one that looks at an increase in a specific type of white blood cell and another that identifies an antibody which is present when a person has mononucleosis. No treatment other than rest is needed in the vast majority of affected people. Due to the risk of rupture of the spleen, contact sports should be avoided until clearance has been given by a physician. On rare occasions, a short-term course of steroids like prednisone may be of value for extreme throat swelling that inhibits swallowing or endangers breathing. Steroids don't cure the disease, but serve to reduce the inflammatory response. In most cases of mononucleosis, hospitalization isn't necessary. Currently, there is no vaccine available to prevent infectious mononucleosis. People who have had mono can shed the virus periodically in their saliva for the rest of their lives. Excerpted from The Complete Idiot's Guide to Dangerous Diseases and Epidemics © 2002 by David Perlin, Ph.D., and Ann Cohen. All rights reserved including the right of reproduction in whole or in part in any form. Used by arrangement with Alpha Books, a member of Penguin Group (USA) Inc.
<urn:uuid:81afb1a8-fa3e-441d-908b-b95432763e77>
CC-MAIN-2016-26
http://www.infoplease.com/cig/dangerous-diseases-epidemics/infectious-mononucleosis.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398869.97/warc/CC-MAIN-20160624154958-00058-ip-10-164-35-72.ec2.internal.warc.gz
en
0.966052
636
4.09375
4
From Our 2010 Archives Cholesterol Levels Fluctuate With Menstrual Cycle Latest Cholesterol News TUESDAY, Aug. 10 (HealthDay News) -- Women's cholesterol levels vary throughout their menstrual cycle as their levels of estrogen rise and fall, a new study reveals. This means that to get a clear picture of a woman's cholesterol levels, doctors may need to take readings over several months before deciding whether the patient needs to have her levels lowered, the researchers noted. "Doctors who are looking at women [for] high cholesterol have to take into account the phase of the menstrual cycle they are at when they take the measurement," said study co-author Enrique F. Schisterman, chief of the Epidemiology Branch at the Eunice Kennedy Shriver National Institute of Child Health and Human Development. To make cholesterol readings more consistent and reliable, measurements should be taken at the same time each month for a couple of cycles, Schisterman added. "Practically, it's easier to recognize the beginning of a cycle," he said. "So if you do it consistently at the beginning of the cycle then you will get consistent measures over time." The report is published in the current online edition of the Journal of Clinical Endocrinology and Metabolism. For the study, Schisterman's group compared levels of estrogen with cholesterol and triglyceride levels in 259 healthy women, aged 18 to 44. Most of the women (94%), had 14 or more measurements taken over two menstrual cycles. The women also charted the phases of their cycles using at-home fertility monitors that detect hormone levels indicating ovulation. Most of the women were physically active and did not smoke. Only 5% had cholesterol levels higher than 200 mg/dL, which is borderline high-risk for heart disease. But, cholesterol levels among 19.7% of the women reached 200 mg/dL at least once. In addition, some obese women over 40 had greater fluctuation in cholesterol levels than did the rest of the group, the researchers noted. The researchers found that as estrogen levels rise, HDL, or "good" cholesterol also rises, peaking at ovulation. At the same time, as estrogen levels increased, total and LDL, or "bad" cholesterol, as well as levels of triglycerides, fell, Schisterman's team found. This decline began a couple of days after estrogen levels peaked at ovulation. In addition, levels of total cholesterol, LDL cholesterol and triglycerides were lowest just before the start of menstruation, the researchers noted. "This is more recognition that hormones play a very important role in women's lives on all levels, including basic tests, like the test for cholesterol," Schisterman said. "The menstrual cycle plays a very important role in women's overall health." Dr. Jennifer Glueck, an assistant professor of clinical medicine in the division of endocrinology, diabetes and metabolism at the University of Miami Miller School of Medicine, said that "I really wasn't aware that the levels of the lipids could fluctuate like that over the course of the menstrual cycle." However, the finding may not be particularly clinically relevant to this group of young women, Glueck said. "These are young healthy women that you wouldn't be considering to start cholesterol-lowering medications on," she said. "It doesn't seem like it pushed them into categories where you would initiate treatment." So while the finding is interesting, it probably won't change clinical practice, she noted. Copyright © 2010 HealthDay. All rights reserved. SOURCES: Enrique F. 
Schisterman, Ph.D., chief, Epidemiology Branch, Eunice Kennedy Shriver National Institute of Child Health and Human Development; Jennifer Glueck, an assistant professor, clinical medicine, division of endocrinology, diabetes and metabolism, University of Miami Miller School of Medicine; Aug. 10, 2010, Journal of Clinical Endocrinology and Metabolism, online
<urn:uuid:0e675c85-4a38-4549-ad18-402794ad4399>
CC-MAIN-2016-26
http://www.medicinenet.com/script/main/art.asp?articlekey=118810
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397744.64/warc/CC-MAIN-20160624154957-00086-ip-10-164-35-72.ec2.internal.warc.gz
en
0.946238
821
2.65625
3
More young people and adults in their 30s and 40s are being hospitalized for stroke, even as stroke rates are dropping in older people, new data show. The findings, reported this week at the American Stroke Association conference in Dallas, may be a sign that rising rates of obesity, diabetes and high blood pressure among teenagers and young adults are taking a toll. Or it may simply be that physicians have improved their diagnosis and reporting of stroke in young people during the past decade. Ischemic stroke occurs when a clot or narrowing of the arteries stifles the blood supply to the brain. Analysts at the Centers for Disease Control and Prevention reviewed the number of acute ischemic stroke hospitalizations by age and sex from 1994 to 2007. They found that stroke hospitalizations among men and women 45 and older have fallen by 25 and 29 percent, respectively. But stroke hospitalizations rose sharply among men and women ages 15 to 44, including a 51-percent jump among 15- to 34-year-old men. There were also notable increases among children, though the number of strokes in children remains very small overall. The study found increases of more than 30 percent in boys and girls ages 5 to 14. Hospitalizations for stroke declined, however, in girls younger than 5.
<urn:uuid:9d9625fb-db27-4af3-9067-29aba35d1eff>
CC-MAIN-2016-26
http://www.vegsource.com/news/2011/02/stroke-rising-among-young-people.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783394605.61/warc/CC-MAIN-20160624154954-00159-ip-10-164-35-72.ec2.internal.warc.gz
en
0.970548
261
2.703125
3
For many living in the harsh and desolate deserts of south Jordan, life without electricity is the norm. Either the infrastructure which provides electricity doesn't reach them or they simply don't have the money to afford it. However, all that looks set to change as two women bring to light the advantages of solar energy. Two Jordanian Bedouin women have recently returned from a six-month course at a unique college in India where they were trained as solar engineers. The two women, who are illiterate and have never been employed, were carefully selected by the elders in the village to attend the course at Barefoot College in India, which helps poor rural communities become more sustainable. "We've been taught about solar energy and solar panels and how to generate light," explains Rafi'a Abdul Hamid, a mother of four who lives in a tent in the deserts of south Jordan. "Hopefully when we return we will be able to teach others and use everything we've learnt here in India to improve our village."

Building Sustainable Bedouin Communities

Many of the Bedouin communities in Jordan, which previously lived off their herds, are now highly dependent on government handouts. They usually make up the poorest sector of society and have a very low standard of living. As such, the government sees this project as a strategic way to encourage these poor villages to generate their own energy and also become more self-sufficient. Raouf Dabbas, the senior advisor to the Ministry of Environment in Jordan, told Green Prophet: "Providing this green technology to the rural community, whilst it will not have a major impact on reducing climate change, it will have a profound impact on the socio-economic position of the bedouins and it will help improve their standard of living." The project is also seen as a stepping stone towards Jordan's rather ambitious plans to source 20% of its energy mix from sustainable sources by 2020. "This is certainly one step in that direction," adds Dabbas. "Jordan currently imports 98% of its oil and energy from the outside and at a time when crude oil prices are unstable, Jordan must actively look for sustainable forms of energy."

Realising the Potential of Renewable Energy

As such, this project is not only about training women to help bring solar power to poor and remote villages but it's also about demonstrating that renewable energy can improve people's daily lives and also cut back emissions. Sponsors are required to help pay for the initial equipment setup, but after that the project will be able to sustain itself through the revenue it generates from excess electricity. Barefoot College launched the solar power course for women in 2005 and already more than 150 grandmothers from 28 countries have been trained. Over 10,000 homes in 100 villages have been solar electrified, which has saved 1.5 million litres of kerosene from polluting the atmosphere. With so much success already you can't help but feel confident that change is also on the way for the sleepy Bedouin villages of south Jordan. As Rafi'a insists, "I have no doubt that we are going to achieve a lot - I'm hoping that my life and that of my village will change forever." :: Image via Barefoot College. The College trains poor, rural women to become Barefoot Solar Engineers who solar electrify their own communities. Barefoot Solar Engineers from 32 countries have been trained by the College since 2004.
<urn:uuid:dbd0e239-5672-4df0-8a10-2118abf73ba2>
CC-MAIN-2016-26
http://www.greenprophet.com/2011/03/bedouin-women-solar-power
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396106.71/warc/CC-MAIN-20160624154956-00121-ip-10-164-35-72.ec2.internal.warc.gz
en
0.965335
733
3.21875
3
Some say the fiercest wars occur within a family or nation. The Sierra Club is the United States' largest environmental activist organization - and it's like some raucous families as ugly epithets are hurled in the current in-house controversy. It is ironic that the worst labels are being used by those standing alongside a renegade board against members who are adhering to a stance rooted in the first Earth Day. In 1971 the Sierra Club led in helping Americans accept that a once-frontier country had finite land and resources. Moreover, environmental justice patently required that we rein in our gargantuan resource use so the globe's poorer people could enjoy a rising standard of living. A year later President Nixon's broadly-based Commission on Population Growth and the American Future called for stabilizing population with alacrity. Significantly, it based its recommendations not just on quantitative measures and looming environmental depredation but on qualitative lifestyle values endorsed by most Americans and models like Henry Thoreau. These values included a love of small communities, of uncrowded wilderness and solitude, of low-density housing. Operating in a low-immigration era, the commission mentioned only parenthetically that immigration policy would have to honor population policy. When in 1978 the Sierra Club called upon Congress to review immigration for its demographic and environmental effects, it was merely echoing the commission. In 1988 it renewed this call for immigration levels consistent with stabilizing population. An unchanged message has encountered a changing demographic dynamic. US population growth is no longer due to above-replacement-level births to native-born women; the "baby boom echo" was small and short-lived among boomers who embraced late childbearing and below-replacement fertility. Today, nearly 75 percent of our growth derives from immigration and, more importantly, births to immigrants, which swells the numbers of parents in the next generation. Many environmental and human rights groups now try to convince Americans that it was moral to advocate population stabilization in the earlier instance but immoral in the latter. In February, 1996, a Sierra Club board attuned more to political correctness than to physical reality voted to refrain from taking a position on US population and immigration levels and policies. Today, board backers deride the qualitative concerns of the president's commission for preserving the physical environment cherished by Americans, terming these "elitist" and "nativist." They also wrongly suggest that one must choose between macro and micro environmental issues. They accuse stabilization advocates of wanting to hog their toys and resources. The board and its supporters have renounced that most basic tenet, the systemic nature of environment. They also deny that at some point, quantitative change becomes qualitative change. High consumption levels multiply the effects of any given population. But, how does continuing population growth ease the problems of urban sprawl and congestion, crowded classrooms, farmland loss, and endangered species? The same industrial emissions that threaten urban populations - poor and rich - scarcely nurture deciduous forests or marine life. Massive populations frustrate the hopes of the poor, raise the cost of housing, and estrange urban residents from the natural settings eco-psychologists believe essential for nurturing the spirit. 
Why is it immoral to resist the 21st-century cities of 20 million, 30 million and 40 million that immigration supporters are foisting upon us? I now tell fellow population policy advocates to dispense with numbers for two reasons. First, the numerate are already with us. Those less numerate are numbed by hearing that the US won World War II with a population half the present day's, or that both the country and its largest state burst past ecologically sustainable population around 1950. Second, there are so many ways to lie with data. Instead, we should demand from those who assert "numbers don't count" proof that Americans share the belief that life in a large, dense city is identical to and as acceptable as life in a small city with clear urban-rural boundaries. We should ask why it is morally suspect to want to leave posterity with a country living well within its ecological limits, with the same lifestyle options previous generations had. We should ask how America can serve as an example to the world if it is unwilling to accept the demographic and ecological constraints it urges upon others. The 1970s mantra, "Think globally, act locally," is even more relevant today in a yet more-crowded world. "Local" action requires accepting that protecting America's habitat requires the end of the immigration era. "Global" thinking requires that we assist other nations in attaining demographic and environmental equilibrium. Living beyond our demographic means will impoverish the entire world. * B. Meredith Burke is a demographer and a senior fellow of Negative Population Growth, a Washington-based policy organization.
<urn:uuid:2d42c025-685e-4b93-9596-6c305dbb09b5>
CC-MAIN-2016-26
http://www.csmonitor.com/1998/0421/042198.opin.opin.1.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403823.74/warc/CC-MAIN-20160624155003-00170-ip-10-164-35-72.ec2.internal.warc.gz
en
0.947967
976
2.65625
3
Achievement Gap Widens, But Test Aims to Close It The College Board's report of a three-point increase on the math SAT is equivalent to getting an additional one-third of a question correct, notes test prep expert Dr. Gary Gruber. Moreover, the score increase obscures the widening gap between the scores of white and minority students over the past decade. That gap grew despite the adoption of National Education Goals in the 1990s and the expenditure of more than $10 billion a year to boost minority achievement under the federal Title I program. The situation is bleakest for black children: The black-white gap on the verbal SAT was 91 points in 1990 but is now 94, and a 96-point math SAT gap has widened to 104 points. Latino children scored 60 points below whites on the verbal SAT in 1990 but now score 67 points below; on the math SAT, the Latino-white gap has widened from 51 points to 63 points. "The gaps among different ethnic groups are widening," warned Gruber, who has written over 30 books on test preparation that have sold over 8 million copies. "Getting one-third of a question correct is not a time to rejoice, but a time to examine how we can really impact student's scores across the board." Gruber has helped design an SAT prep course for TestU, a company formed in August 1999 by a group of educators who wanted to "democratize" education via the Internet. TestU's aim is to provide universal access to high quality and affordable test preparation for the SAT, TOEFL, ACT, and state exit exams such as the New York Regents Exam. The company has enrolled 12,000 students in its customized SAT course. For more information . . . about TestU, visit its Web site at www.testu.com.
<urn:uuid:a2f7508c-5ad9-4bf6-b26d-52e7db795950>
CC-MAIN-2016-26
http://news.heartland.org/newspaper-article/2000/12/01/achievement-gap-widens-test-aims-close-it
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395620.9/warc/CC-MAIN-20160624154955-00000-ip-10-164-35-72.ec2.internal.warc.gz
en
0.948763
377
2.671875
3
Don't Tell Her She Can't Succeed NORTHAMPTON, Mass. – When told she will not succeed, a woman's brain can take on an emotional burden that inhibits her ability to succeed, according to a Smith College study that documents, for the first time, the brain regions affected by positive and negative stereotypes. Researchers used functional magnetic resonance imaging (fMRI) to document the brain activity in 54 women between the ages of 18 and 34, after they read a stereotypical message about women and then performed a spatial reasoning task. The task required them to view pictures of objects and describe what the objects would look like from different, imagined perspectives. The group exposed to a negative stereotype made 6 percent more errors than the group exposed to a neutral message, and 14 percent more errors than the group exposed to a positive stereotype. No difference was found in the response time across groups. Poor performance in the negative stereotype group corresponded to increased activity in brain regions associated with increased emotional load. By contrast, the better performance of women in the positive stereotype group was associated with increased activity in visual processing areas and complex short-term memory processing areas. “The results demonstrate the remarkable power of culture in determining performance,” said Maryjane Wraga, associate professor of psychology at Smith, and lead author on the study, published in the journal Social Cognitive and Affective Neuroscience. Despite the differences in performance among the groups exposed to different messages, the messages seemed to operate on an unconscious level, according to Wraga. When asked whether the messages had affected their performance, most participants reported it had not influenced them at all. Moreover, the fact that women in the control group performed worse than those in the positive group suggests that women are not necessarily performing at their top ability under neutral situations. Researchers used a spatial reasoning task and, in particular, one that required mental rotation, because spatial reasoning is thought to play a major role in men's superior performance on measures such as the Scholastic Aptitude Test (SAT). In addition to Wraga, researchers included Smith alumnae Molly Helt and Emily Jacobs, and Smith undergraduate Kerry Sullivan. Helt is now a graduate student in the Department of Neuropsychology at the University of Connecticut, and Jacobs, a graduate student in the Department of Neuroscience at the University of California, Berkeley. The research was supported by a grant from the National Science Foundation and performed at Dartmouth College’s Brain Imaging Center. Office of College Northampton, Massachusetts 01063 Media Relations Director T (413) 585-2190 F (413) 585-2174
<urn:uuid:1db2af5b-0057-4d87-9687-58b9daf06256>
CC-MAIN-2016-26
http://www.smith.edu/newsoffice/releases/06-038.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397864.87/warc/CC-MAIN-20160624154957-00010-ip-10-164-35-72.ec2.internal.warc.gz
en
0.931269
539
2.65625
3
The Nick J. Rahall, II Appalachian Transportation Institute (RTI) is pleased to announce that its LEGO 24-Hour City is now an official Science and Engineering NASA Site (SENSORS). SENSORS is a NASA Leading Educators to Applications, Research, and NASA-related Educational Resources in Science Cooperative Agreement Notice. The purpose is to enhance K-12 science, math, technology, and geography education through Internet-based products derived from NASA mission content. With the use of telerobotics and LEGO RCX robots, students can explore and discover different environments. The Appalachian Transportation Institute is located at Marshall University in Huntington, West Virginia. One of the goals of the RTI is to provide quality transportation-related programs to elementary, middle, and high school students, with the express purpose of attracting students to careers in transportation fields. Each year, a variety of workshops are sponsored which enable students to explore the technologies and issues related to transportation in the United States. The LEGO 24-Hour City at RTI is built around the LEGO Dacta RCX intelligent programmable brick. Using LEGO Dacta's ROBOLAB software, students at RTI are able to program each of the city's vehicles and traffic control elements. Previously, using Red Rover software, students at other sites could remotely activate the motors and lights on the City's LEGO models and run set-up programs. Now, using the Internet, students from anywhere can also program the City's elements themselves and remotely receive information and data from the City. Linda Hamilton, mathematics instructor at Marshall University, working with Chris Rogers of the Tufts University Center for Engineering Educational Outreach, has designed the LEGO 24-Hour City at RTI to be operational for remote sensing. Many schools have remotely operated the City's monorail, gates, traffic counters, and vehicles. Programs like the LEGO 24-Hour City at RTI use hands-on, real-life activities that build interest in engineering, robotics, and remote sensing among young people. This interest will stay with the students throughout their education and lead them to careers in transportation and traffic fields. A SENSORS LEGO Robotics Workshop was held at Tufts University on June 6-8, 2001. Marshall University, NASA Jet Propulsion Laboratory, NASA Glenn, NASA Goddard and NASA Ames representatives planned their SENSORS site and programmed LEGO robotics at the Tufts University SENSORS workshop. LEGO Links of Linda Hamilton [email protected]
<urn:uuid:d298c058-8081-4c7a-80c4-cd95eea61cd4>
CC-MAIN-2016-26
http://www.marshall.edu/LEGO/RTI-SENSORSCITYdoc.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403508.34/warc/CC-MAIN-20160624155003-00014-ip-10-164-35-72.ec2.internal.warc.gz
en
0.92488
517
2.546875
3
FAYETTEVILLE, Ark., March 30 (UPI) -- Earlier this year, scientists at Caltech offered the most convincing evidence yet of a ninth planet, Planet X. Now, a retired astrophysicist suggests the hidden planet is responsible for Earth's periodic mass extinctions -- like the disappearance of the dinosaurs. In a new study published in the journal Monthly Notices of the Royal Astronomical Society, Daniel Whitmire argues that an undiscovered ninth planet triggers disruptive comet showers every 27 million years. It's not the first time Whitmire -- now a math teacher at the University of Arkansas -- has made such a claim in a major scientific journal. In 1985, he offered a similar explanation for mass extinctions in the journal Nature -- then an astrophysicist at the University of Louisiana at Lafayette. Whitmire and his research partner John Matese pointed to evidence of periodic comet showers in the fossil record dating back some 500 million years. In 1985, there were two alternative theories for what might trigger major comet showers -- a sister star to the sun, vertical oscillations of the sun as it orbits around the center of the Milky Way. Those theories have since been discredited, while the Planet X theory has acquired legitimacy. The Caltech study estimated Planet X to be approximately 10 times the mass of Earth, big enough to throw comets into the inner solar system as its oblong orbit sends it closer to the Kuiper Belt every 27 million years. The Kuiper Belt is a ring-shaped region of comets and other larger bodies circling the solar system just beyond Neptune. Caltech researchers inferred the existence and path of a ninth planet by studying anomalies in the orbits of several major Kuiper Belt objects. Whitmire suggests -- as they did in 1985 -- that a periodic invasion of comets results in violent collisions. Those that miss Earth disintegrate in the inner solar system and dim the sun's solar energy, cooling Earth. Whitmire is hopeful additional evidence of Planet X can offer more answers about the evolution of the solar system and life on Earth. "I've been part of this story for 30 years," he said in a news release. "If there is ever a final answer I'd love to write a book about it."
<urn:uuid:d3a2a597-4143-4262-ac0e-7d82f6609a41>
CC-MAIN-2016-26
http://www.upi.com/Science_News/2016/03/30/Is-Planet-X-to-blame-for-Earths-mass-extinctions/6771459342170/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391766.5/warc/CC-MAIN-20160624154951-00093-ip-10-164-35-72.ec2.internal.warc.gz
en
0.925657
461
3.5
4
[Image: Components of the ALICE detector spread out like sunbeams during the integration of the device's inner tracker in March 2007. ALICE is one of the four main detectors at the Large Hadron Collider. Credit: Maximilien Brice / CERN]

The advance buzz over the world's largest atom-smasher is reaching a steady hum, and the date for the Large Hadron Collider's official premiere in Europe is due to be announced as early as this week. The first all-around injection of proton beams is expected in September - at just about the time that a federal judge in Hawaii considers a case claiming that the darn thing could destroy the world. Meanwhile, the LHC's older, less powerful rival - the Tevatron at Fermilab near Chicago - has announced discoveries that suggest the Americans could yet steal some of the Europeans' thunder. Eventually, the 17-mile-round (27-kilometer-round) Large Hadron Collider will smash opposing beams of protons with the energy of two bullet trains traveling at 100 mph. At those energies, previously undetected physical phenomena could pop out - ranging from the Higgs boson (which is thought to give subatomic particles mass) to microscopic black holes (which scientists have repeatedly said pose no danger) to supersymmetric particles (which could point the way to invisible dimensions of space and/or explain dark matter). The project was conceived decades ago and has been under construction for five years. The startup schedule has been repeatedly delayed - from last November, to this spring, to this summer. But now Europe's CERN particle-physics center is focusing down on the final stage of preparations, and the superconducting collider magnets have been cooled down nearly to their target temperature, just 1.9 degrees Kelvin. That's colder than the background temperature of outer space. Last week, CERN spokesman James Gillies told Physics World that he expected to announce the timing for the first beam injection - also known as "Red Button Day" - sometime this week. The current best guess is that Red Button Day will come during the second week of September, but we'll have to stay tuned for the official word.

Doomsday lawsuit due for hearing

By that time, a lawsuit filed against CERN as well as the U.S. Department of Energy may well get its day in court. The suit, filed in March by two critics of the LHC, contends that the collider could destroy the world if it creates micro black holes, strangelets or other weird phenomena. The critics want the court to block LHC operations, while federal lawyers want the suit dismissed. Both sides are supposed to file additional briefs in the case over the next couple of weeks. A court hearing is scheduled Sept. 2, and the judge could conceivably render a ruling by Red Button Day. Red Button Day will be the big day for news coverage but only one step in the startup process. It may well take until next year for the proton collisions to reach full power.

The race to find the Higgs boson (or not)

A little conflict adds spice to any blockbuster, and the court battle between the LHC's critics and its defenders isn't the only source of drama: Rival researchers at Fermilab are hoping to achieve a breakthrough before the European collider overtakes them. Back in 1995, Fermilab's scientists announced that they had detected the last undiscovered quark, the top quark. Now the biggest quarry in particle physics is the Higgs boson, the last undiscovered fundamental particle whose existence has been predicted by the grand theory known as the Standard Model.
The Higgs boson is thought to give rise to a field that selectively endows some particles (like protons) with mass, while letting other particles (like photons) go massless. Physicists believe the Higgs boson may or may not be detectable at Fermilab's Tevatron collider, depending on how heavy it is. The latest word from the lab is that they're pretty sure how heavy it isn't. An analysis of collisions shows (to a 95 percent confidence level) that the Higgs boson can't have a mass around 170 GeV/c2, a measurement unit that reflects Einstein's E=mc2 formula for energy-mass conversion. |The DZero experiment is one of the detectors at Fermilab's Tevatron collider. "We're pretty energized about this," said Darien Wood, spokesperson for Fermilab's DZero experiment, who seemed hardly aware of the pun as he spoke it. Fermilab's scientists soon expect to widen the no-Higgs zone, going down to 165 GeV/c2 and up to 175 GeV/c2. That would eliminate additional hiding places where the Higgs might lurk. "These results mean that the Tevatron experiments are very much in the game for finding the Higgs," Pier Oddone, Fermilab's director, said in a news release issued today. The strategy is to eliminate so many potential mass ranges that you can't help but find the Higgs by focusing on the ranges that are still open. It's like finishing up a jigsaw puzzle by trying all the leftover pieces until you come across the ones that fit. "You're setting these limits, and at some point you don't get limits. You don't move," Wood explained. "That's one of the first indications of the signal. ... Ideally, we would hit one of these masses where the Higgs exists." Previous experiments have indicated that the Higgs mass should be between 114 and around 200 (maybe even less) on the particle-mass scale. Other findings, announced just last week, indirectly suggest a much narrower range of 115 to 135. All this assumes that the Higgs actually exists, of course. If it doesn't, then the Standard Model might turn out to be somehow substandard. Theorists would have to go back to the drawing board. And that could be the most exciting outcome of all. Update for 8 p.m. ET Aug. 4: Do references to GeV/c2 make your eyes cross? Are you looking for something fun? Last week I linked to Kate McAlpine's "Large Hadron Rap," and the online exhibit at The Big Picture is also worth checking out. If you like your LHC images unfiltered, click on over to the collection at the CERN Document Server. Update for noon ET Aug. 5: CERN spokesman James Gillies confirmed that the first beam injection is due to come sometime in the first two weeks of September, and he hoped to be able to announce the exact date this week. Although some of the temperature readings from the collider ring's sectors are bumping around above 1.9 degrees Kelvin, "the machine is basically cold now," he said. The next big step in the testing is to check the injection kickers, part of the magnet system that feeds proton beams into the collider. This weekend, engineers will check the final magnet that "sends particles into the LHC vacuum pipe" for the clockwise proton beam, Gillies said. Some consider that to be a notable step because protons will be zipping into the LHC itself, though just in one sector. Later this month, the injection kicker for the counterclockwise beam will be tested. The big day comes when protons first make the entire 17-mile route around the ring.
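For readers puzzled by the GeV/c2 unit used above, here is a small illustrative calculation (an editorial addition, not part of the original article). It simply applies E = mc2 to express a particle mass quoted in GeV/c2 in kilograms and in proton masses; the constants are standard values, and the 170 GeV/c2 figure is the exclusion value mentioned in the story.

```python
# Illustrative conversion of the GeV/c^2 mass unit via m = E / c^2.
GEV_IN_JOULES = 1.602176634e-10   # 1 GeV expressed in joules
SPEED_OF_LIGHT = 2.99792458e8     # meters per second
PROTON_MASS_GEV = 0.938           # proton mass in GeV/c^2 (approximate)

def gev_c2_to_kg(mass_gev: float) -> float:
    """Convert a particle mass from GeV/c^2 to kilograms."""
    return mass_gev * GEV_IN_JOULES / SPEED_OF_LIGHT**2

if __name__ == "__main__":
    excluded_higgs_mass = 170.0  # mass value ruled out by the Tevatron result, in GeV/c^2
    print(f"{excluded_higgs_mass} GeV/c^2 is about {gev_c2_to_kg(excluded_higgs_mass):.2e} kg")
    print(f"or roughly {excluded_higgs_mass / PROTON_MASS_GEV:.0f} proton masses")
```

The output (about 3.0e-25 kg, or roughly 181 proton masses) gives a sense of why particle physicists prefer GeV/c2 to kilograms when quoting masses.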
<urn:uuid:3d221281-4916-422e-b2d9-48a03b18e0e6>
CC-MAIN-2016-26
http://cosmiclog.nbcnews.com/_news/2008/08/04/4351511-sciences-summer-blockbuster?pc=25&sp=25
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398516.82/warc/CC-MAIN-20160624154958-00133-ip-10-164-35-72.ec2.internal.warc.gz
en
0.951816
1,565
2.625
3
Faith Leaders Play a Key Role in Stopping Domestic Violence

Faith leaders have a unique opportunity to promote primary prevention messages throughout their community. Fulfilling the responsibility for the spiritual and emotional wellbeing of members, a faith leader can: - Speak out against domestic violence – make it a part of a sermon - Address behaviors that help condone violence against women and girls - Promote attitudes that contribute to equality in relationships

Faith communities have a tremendous influence in people's lives and can use it to engage in prevention strategies: - Challenge behaviors that lead to domestic violence - Offer a framework for social justice – a culture free of violence - Partner with the local domestic violence program to assist with their prevention efforts - Use holidays and special events to raise awareness of domestic violence and send messages that counter it and promote peace and equality within relationships

The Support of a Faith Leader is Key to the Safety of Victims

When religion or faith is a deeply held belief, faith leaders can be a resource for victims who are trying to understand what is happening to them and plan for their future safety. A faith leader can: - Help victims explore ways to escape a partner's violence and abuse. - Help those who abuse take responsibility for their actions.

Resources (Primarily Intervention-Focused)

It seems that many domestic and sexual violence prevention programs for religious and spiritual settings are focused on intervention (after the violence has happened), rather than primary prevention (before the violence occurs). Such intervention efforts also tend to engage the faith leader rather than the entire faith community. But some organizations are also working toward ending the violence before it starts.

American Jewish World Service – Their advocacy work is inspired by the Jewish commitment to justice, and works to realize human rights and end poverty in the developing world.

Institute on Domestic Violence in the African American Community (IDVAAC) focuses on the unique circumstances and life experiences of African Americans as they seek resources and remedies related to the victimization and perpetration of domestic violence in their communities. IDVAAC recognizes the impact and high correlation of intimate partner violence to child abuse, elder maltreatment, and community violence.

Jewish Women International (JWI) is the leading Jewish organization empowering women and girls – through economic literacy; community training; healthy relationship education; and the proliferation of women's leadership. Innovative programs, advocacy and philanthropic initiatives protect the fundamental rights of all girls and women to live in safe homes, thrive in healthy relationships, and realize the full potential of their personal strength.

The Faith Trust Institute has an extensive bibliography (search for "primary prevention"). It is a resource for congregations, clergy and other religious leaders, secular and faith advocates, counselors, victims and survivors, or others seeking understanding of religious issues and sexual and domestic violence. The Institute can also provide faith leaders with guidance on sermon content.

The National Online Resource Center on Violence Against Women (VAWnet) Special Collection, Religion and Domestic Violence, contains numerous resources for use by faith leaders.

Washington State Coalition Against Domestic Violence has its Religion and DV: Let's Talk About God resource for use by advocates.
Alert! Computer use can be monitored. Review these safety tips to learn more. Click the red quick escape button above to immediately leave this site if your abuser may see you reading it. Resources for Faith Leaders available from PCADV: Professional Resources for Faith Leaders (General Resources About Domestic Violence) Helping Rural Battered Women and Their Children: A Guide for Faith Leaders and Religious Communities Information designed for faith leaders in rural communities, but can be used by faith-leaders elsewhere. Okayama Dopke, C. (2002) Creating Partnerships with Faith Communities to End Sexual Violence Washington Coalition of Sexual Assault Programs Todhunter, R., Dissertation (2009): The Relationship Between Religious and Spiritual Factor and the Perpetration on IPV Wasserman Shultz, D. (2013) Chag v’ Chesed: Holiday Dvar Tzedek, Passover 5773, American Jewish World Service
<urn:uuid:5393fbb8-f57f-4cb0-a5ac-6a819ae1e6b0>
CC-MAIN-2016-26
http://www.pcadv.org/Learn-More/Prevention/Resources/Faith-Based/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392527.68/warc/CC-MAIN-20160624154952-00130-ip-10-164-35-72.ec2.internal.warc.gz
en
0.906989
832
2.625
3
Durgabai Deshmukh distinguished herself as a fearless freedom fighter and a dedicated social worker. Popularly known as the 'Iron Lady' she was born on July 15, 1909 at Rajahmundry (in Andhra Pradesh) in a middle class family. She did not have access to formal education initially. But it was due to sheer determination to educate herself that she obtained a bachelor's degree from Andhra Pradesh. Later she studied law and began practicing at the Madras High Court. After independence she joined the Supreme Court Bar. Durgabai's patriotism was recognised in 1930 when the Salt Satyagraha was launched. She, with the help of two other prominent nationalists (A. K. Prakasam and Desodharaka Nageswararao), organised the movement in Madras. She was arrested and imprisoned for her involvement in a movement that had been banned. She continued with her anti-British activities even after her release. In 1946, Durgabai shifted to Delhi. She became a member of the Constituent Assembly and used her potential in framing the constitution. In 1952, Durgabai contested the general elections but failed to win. However, in recognition of her selfless service to the nation she was awarded the Tamrapatra. Several of her unique achievements were in the field of social work. She realised that the progress of a country was entirely dependent on the emancipation of the masses. It was for this reason that she gave utmost priority to social reconstruction. Because of her concerted efforts, the Andhra Mahila Sabha was set up in 1941 for the welfare of women. Later, several branches of this sabha were opened in different parts of the country. Durgabai also edited a journal known as Andhra Mahila and inspired women to rebel against meaningless social constraints imposed on them. Comprehending the value and role of education in bringing about social change, she set up the Andhra Education Society. Sri Venkateswara College in the University of Delhi also owes its origin to her. Further, she played a pioneering role in the setting up of the Central Social Welfare Board. She was awarded the Paul Hoffman Award for her contribution to social work. Durgabai Deshmukh died on May 9, 1981.
<urn:uuid:687578b3-0641-432c-ae49-2dcd95745317>
CC-MAIN-2016-26
http://www.preservearticles.com/201104235821/biography-of-durgabai-deshmukh.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783408840.13/warc/CC-MAIN-20160624155008-00005-ip-10-164-35-72.ec2.internal.warc.gz
en
0.98828
467
2.90625
3
The future of the nanotechnology field depends on our ability to reliably and reproducibly assemble nanoparticles into 3D structures we can use to develop new technologies. According to Hao Yan and Yan Liu at Arizona State University, the greatest challenges in this burgeoning field include control over nanoscale 3D structure and imaging these tiny materials. "The ability to build predicted structures and provide experimental feedback to current theories is critical to the nanotechnology field," said Yan. One approach to production of nanoscale architecture is creation of nanoparticles that assemble themselves into the desired structure. DNA molecules are an elegant biological example of small particles that self-assemble to form higher-order 3D structures.

[Image caption: The design of the DNA scaffold system permits formation of a variety of tubular structures carrying 5 nm AuNPs (gold particles). Researchers observed formation of tubes displaying patterns of AuNPs in stacked rings, single spirals, double spirals, and nested spiral tubes. This TEM image shows all four of these conformations. Credit: Hao Yan, Arizona State University]

Inspired by this prototype, Yan and colleagues looked to Mother Nature to solve their nano-sized problem. They attached gold nanoparticles to DNA, taking advantage of its self-assembling biochemical properties to engineer nanotubes that form a number of different 3D structures. The researchers manipulated nanotube size and shape by changing the size of the gold particles attached to the DNA or the DNA structure itself. Anchi Cheng at the Scripps Research Institute contributed to the project by imaging the 3D conformations of nanotube structures using cryo-electron tomography. Yan is hopeful this groundbreaking work will serve as the foundation on which emerging fields and new technologies may be built. "Now that we have methods to alter the periodicity, diameter and chirality of nanotube formation, we can use what we have learned to control hierarchical assembly of these building blocks to create more complex 3D structures," he said. In the future, use of nanotubes may reduce the size of cell phones and other electronic devices even further. Scientists also envision using nanotubes for a number of biological applications including gene and drug delivery. Drugs or other treatments specifically delivered using nanotubes would target only affected tissues, potentially eliminating toxic side effects. Funding from the National Science Foundation.
- "A better picture might be obtained by reading this article from an Irish journalist based in Russia..." - What Happens To A Soccer Player’s Brain After Missing A Penalty Kick - It’s Back to Shots for Flu Prevention - ACSH Applauds Media Awareness of the Fentanyl Crisis - Counting Bites Examined, to Help Decrease Food Intake - The Safe And Unsafe Nutty Treats For Your Pup - Mr. Potato Head Needs a New Warning Label! - Should I stay or should I go? - New cancer immunotherapy drugs linked to arthritis in some patients - Simulations foresee hordes of colliding black holes in LIGO's future - Analysis of genetic repeats suggests role for DNA instability in schizophrenia - Analysis of media reporting reveals new information about snakebites and how and when they occur
<urn:uuid:0fb61dde-33dc-4630-911d-f9007670a216>
CC-MAIN-2016-26
http://www.science20.com/news_releases/self_assembling_nanotubes_get_inspiration_dna
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393442.26/warc/CC-MAIN-20160624154953-00015-ip-10-164-35-72.ec2.internal.warc.gz
en
0.885575
915
3.734375
4
I’ve been hearing the word “multi-modal” thrown around a bit carelessly lately. As in “Cincinnati needs a multi-modal transportation system” or “people want to be able to choose from multiple modes of transit”. I think this line of thinking has in many cases overshot it’s original intent and gone to a place that’s slightly harmful to a reasonable conception of the best way to supply transportation. But first, what was the original intent? Multi-modal means that there is more than one “mode”. A mode here is meant to mean a vehicle type, such that a list of modes might read: - tractor trailer - flying monkey - ski gondola “Multi-modal” seems to have started1 as a critical term addressing car culture…”I think the airport needs to be accessible by multiple modes” would mean that it’s being accessible by only car is unacceptably limited. It seems to have grown legs in some circles though. I’m not sure anyone would admit to holding the position I’m about to define, but I’m definitely sensing the word being used in this way by quite a few people locally and nationally: “Multi-modal” is starting to be applied to transit systems alone such that “Cincinnati needs a multi-modal transit system” means that Cincinnati should provide more choices than buses to people using the transit system. It means that subways should be provided and perhaps also streetcars so as to improve “choice” and “provide more options”. The analogy between the first definition and the second is subtle but disturbing. Cars and buses are different in kind while buses and streetcars are different in degree. In the first case, the car “mode” is owned exclusively by and fully directed by the user, while the bus is not. Streetcars and buses though are merely variations on a theme: the concept of public transit. Streetcars and buses may be apples and oranges, but buses and cars are apples and…cars. The first are both fruit, different though they may superficially be. Cars and bicycles are a closer analogy. We might even include walking in there. In any case, the traveler owns and controls the means fully. It’s not a shared vehicle with a set path, but one that can go any which way the “driver” likes. It might be useful to say that if we can provide access to bicycles, it would be good also to provide access by car and by foot as well. Whatever we’re talking about is likely accessible to one if the other. But to say that if we can provide access by bus then it would be better to provide access by bus and subway and streetcar doesn’t quite hold up as well in our case. I’m willing to say that this IS true in the case of intercity travel where travelling by plane can be a major but quick pain, travelling by train a deliciously slow luxury, and by bus a happy medium. In these cases, the differences between the vehicles are exaggerated by time and distance such that they become a difference of kind. A trip across the country by train is so different from a trip by plane that in the terms of subjective experience it can’t quite be compared. I’ve made many friends and even had a fling(!)2 on a train, but I almost never speak to people on a plane. When we’re looking at local trips though the difference is not so great. If we’re trying to get from Downtown to Clifton Heights, the longest it could possibly take is 20 minutes including waiting time. At this scale our primary interest is speed rather than comfort. We’d barely get the seat warm on a five minute ride. 
At the local scale, the position that vehicle choice is somehow choice itself seems to deny other much more important aspects of functional transit like
- Where the line actually goes
- When it goes there
- How often it goes there
- How quickly it does it
- How much it costs
The nature of the vehicle itself (and really the difference is minor between a bus and a streetcar) is a consideration to be taken into account when the ability to make a trip to the place you want to go at a reasonable cost and at the time you want is already taken for granted. The consideration of comfort is secondary to functionality. I can prove this with the example of roller-coasters. They're tremendously fun (comfort) but utterly useless as transit (practicality). A roller-coaster, move you though it might, is just not transit. To say that we need a multi-modal transit system, with multi-modality as a goal or objective itself, is to put the cart before the horse. It's like saying we need to buy a whole bunch of kitchen equipment before we have any idea what we'll be cooking. One last analogy before I go to bed: A coral reef is diverse, and that diversity makes it strong and resilient and even beautiful. But not a single one of the millions of parts of that system came about for those reasons. Each organism exists in its glorious eccentricity for the incredibly simple purpose of living. Whatever form each takes was the most contingent for its simple purpose. We need not set out to make clown fish, but merely trust that they will arise to surprise us if we pursue our simple purpose: effective transportation.
- at least in the context I'm concerned with here. I think it may have originated in freight transportation to refer to ships, planes, trains and trucks, particularly as they move shipping containers that are easily transferable between modes. Anyone care to check that for me? ↩
- Could I claim to be a member of the "meter high club"? The trip between Chicago and St. Louis is never so memorable as when someone walks by in the lounge car, turns back to tell you you have beautiful eyes and you proceed to talk intimately for the next 7 hours because you'll never see each other again…sigh….oh Matthew. ↩
<urn:uuid:4e677b29-a9ca-4f56-aa3e-004aaef3be1f>
CC-MAIN-2016-26
http://cincymap.org/blog/multi-modal/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395620.9/warc/CC-MAIN-20160624154955-00129-ip-10-164-35-72.ec2.internal.warc.gz
en
0.969635
1,311
2.6875
3
North American bats are facing a tough new millennium. About 600,000 per year are already killed due to government subsidies of wind energy and so far 7 million have died due to White Nose Syndrome. While we are likely stuck with wind energy for the foreseeable future, there is hope for White Nose Syndrome. Scientists have discovered that the deadly WNS fungus can survive in caves with or without the presence of bats. Our flying mammal friends serve as food plant pollinators and they keep the insect population under control. They even have value in medical research, particularly as it pertains to blindness. But it's the insects people think about the most: a single bat can eat thousands of insects in a single night and so they are critical to controlling bugs that threaten agriculture and forestry; estimates are often somewhat made up but if it helps, their pest-control value to the economy is estimated in the billions of dollars. The new research, led by University of Akron associate professor of biology Hazel Barton, identifies cold-loving, cave-dwelling fungi closely related to the WNS fungus, where and how they spread, and how they survive. These findings could help predict the future of North American bats, among them the common Little Brown Bat, first seen with White Nose Syndrome in Ohio in March 2011. White Nose Syndrome appears as a white, powdery substance on the muzzles, ears and wings of infected bats and gives them the appearance of having been dunked in powdered sugar. Since it was first discovered in hibernating bats in New York in winter 2006-07, WNS has spread across 22 states, including Ohio. In Vermont's Aeolus Cave, which once housed 800,000 bats, WNS wiped out the hibernation den's entire population. In "Comparison of the White-Nose Syndrome agent Pseudogymnoascus destructans to cave-dwelling relatives suggests reduced saprotrophic enzyme activity," published in PLOS ONE, Barton and UA post-doctoral fellow Hannah Reynolds compare two closely related fungal species and reveal common threads, including the discovery that the related fungi share the same nutritional needs. The fungus's nutritional needs, originally satisfied by cave soil, are now met by bats. Barton and her colleagues are zeroing in on when the fungus transferred from environment to bat and the consequences of the fungus's relentless ability to survive solely in caves uninhabited by bats. "The jump from the environment to the bat has come at the expense of some ability for Pd to grow in the environment, but not entirely," says Barton, who adds that the fungus still retains enough function to grow exclusively in caves in the absence of bats. "The ability of the fungus to grow in caves absent of bats would mean that future attempts to reintroduce bats to caves would be doomed to failure," she says. Ongoing research in Barton's UA lab continues to examine the sustainability of WNS to help determine the future of bats amid the deadly disease.
<urn:uuid:6f39fd19-53fb-4692-9940-ccef27e16b7b>
CC-MAIN-2016-26
http://www.science20.com/news_articles/bat_killer_white_nose_syndrome_fungus_can_survive_caves_even_without_bats-128680
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399522.99/warc/CC-MAIN-20160624154959-00064-ip-10-164-35-72.ec2.internal.warc.gz
en
0.958986
608
3.90625
4
Understanding Clinical Research Clinical research studies are a means of understanding diseases and of developing new treatments based on direct study and observation of people. When patients and healthy volunteers enroll in a clinical protocol or clinical trial, they become partners with members of the research team who are experts in their disease and who are searching for better ways to understand and treat disease. Their participation is essential for helping others with the same or similar diseases, both today and in future generations. At Rockefeller University Hospital, patients can take part in clinical studies covering a wide range of medical diseases and conditions (see Clinical Studies and Protocols). The Clinical Research Support Office provides Research Subject Advocacy services to research volunteers and research support services to research teams at the Center for Clinical and Translational Science. Choosing to participate in clinical research is an important personal decision. Participation in a study is predicated on voluntary enrollment and a full understanding of what is involved in the study. This informed consent is a prerequisite for any study, and includes understanding the risks and benefits of taking part, what kinds of tests and treatments will be done, and how much time it will take. Interpreter services are available 24 hours a day, 7 days a week in many languages for non-English-speaking research volunteers/participants. To participate, patients and healthy volunteers must meet certain requirements that are different for each study. These inclusion/exclusion criteria are not used to reject people personally but rather to identify appropriate participants and to keep them safe. The Rockefeller University Institutional Review Board reviews and approves every new study before it can begin, to ensure scientific rigor and to protect the rights and welfare of the study participants. To take part, patients need to feel well informed, confident, and secure about participating. Speak to family members, doctors, researchers, and hospital staff to see if participation in clinical research is appropriate for you. Information on Rockefeller University Hospital clinical research studies can be found at Clinical Studies and Protocols.
<urn:uuid:a4f7c055-acfc-4239-9aeb-165bbfeced50>
CC-MAIN-2016-26
http://www.rucares.org/patientsvolunteers/understand
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396455.95/warc/CC-MAIN-20160624154956-00116-ip-10-164-35-72.ec2.internal.warc.gz
en
0.93535
403
2.90625
3
Once Upon a Time Based on the story's setting, choose 3 things that you found interesting about the time and setting of The Last Command. Describe what kind of an experience you think it would be to live in that time period and galaxy. Would you like it? Why or why not? Interview with a Jedi Write out an interview with Luke Skywalker, including both the questions and his responses. The interview can be done at any time during his life, and you can use the information in the book to help speculate about his answers. Using the planet names, ships, and battles in the book, create a crossword puzzle. Be sure to include both the questions and answers along with how they fit in the puzzle. Split the class into groups for this activity. In each group, a person will act out a character from...
<urn:uuid:f05ae0cf-3c96-44d2-8c77-ecdb16789910>
CC-MAIN-2016-26
http://www.bookrags.com/lessonplan/last-command/funactivities.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396222.11/warc/CC-MAIN-20160624154956-00172-ip-10-164-35-72.ec2.internal.warc.gz
en
0.959072
191
3.015625
3
The existence of a parent-child relationship is one of the foundations on which separate families and particular family composition categories are identified. It only refers to relationships between people usually resident in the same household. It includes relationships in which people actually report a parent-child relationship on the Census form (including being an adopted child or a foster child of an adult), as well as some designated relationships (i.e. for children aged less than 15 years who do not otherwise have a parent in the household, in which case a nominal parent/child relationship is established). An individual may be (of household members) both a parent and a child at the same time (for example, a person could live with their father or mother and have a child of their own). If a child in a household is also identified as being a parent, then precedence is given to the person's role as a parent for family composition coding purposes. See also Child, Family, Family Composition (FMCF), Parent.
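As a purely illustrative sketch of the precedence rule described above (the field names below are hypothetical and not taken from any ABS specification), the parent role can be given priority in a few lines of Python:

```python
def family_coding_role(is_parent_of_household_member: bool,
                       is_child_of_household_member: bool) -> str:
    """Role used for family composition coding.

    A person may be both a parent and a child of other usual residents of the
    same household; in that case the parent role takes precedence.
    The flag names here are hypothetical, for illustration only.
    """
    if is_parent_of_household_member:
        return "parent"
    if is_child_of_household_member:
        return "child"
    return "other"

# A person living with their own parent and their own child is coded as a parent.
print(family_coding_role(True, True))  # parent
```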
<urn:uuid:6079f8b5-d9d3-4825-a3d6-98c6c759cffb>
CC-MAIN-2016-26
http://www.abs.gov.au/AUSSTATS/[email protected]/Previousproducts/EB64B494C0EF7E13CA25720A007DBC4B?opendocument
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393518.22/warc/CC-MAIN-20160624154953-00005-ip-10-164-35-72.ec2.internal.warc.gz
en
0.968291
201
2.71875
3
- Historic Sites Palmetto Fort, Palmetto Flag Aided by certain residents of South Carolina, Colonel Moultrie built a fort, beat a British fleet, and started an enduring legend of valor October 1955 | Volume 6, Issue 6 Leaving nothing to chance, Moultrie daily made personal reconnaissance of the situation and when the British ships swung towards the Charleston channel in that fateful June dawn, he saw the proceedings from an observation post he had established three miles from the fort. Watching the loosened topsails of the leading frigates swell with the first morning breeze, he also spied Clinton’s landing party making toward Long Island. Vaulting into his saddle Moultrie stretched his horse at full gallop for the fort. Foam-flecked and breathless he raced through the gate, shouting for the drummer of the guard to beat the long roll. He was none too soon, for as the call to arms sent the gun crews sprinting to their posts, the first of the towering English ships came gliding up abreast of the ramparts. She was the 28-gun frigate Actaeon and behind her in stately procession followed the 50-gun flagship Bristol and her sister ship of the line, Experiment, with another 28-gun frigate, Solebay, completing the first division; next were the 28-gun frigates, Sphinx and Syren, and lastly the mortarboat, Thunderbird, chaperoned by the frigate, Friendship, also 28 guns. So thorough had been Moultrie’s estimate of the expected attack that he had even anticipated and prepared for the hostile sortie from Long Island. When Clinton started to ford The Breach as the first broadside from the ships roared into thundering echoes across the islands, he found a mixed group of colonial infantry and artillery waiting for him on the further shore. Nor was this the only surprise in store for the royal general. No sooner did his men push their boats out into the inlet than they ran aground on hidden sandbars. Tumbling overboard, the heavily-burdened ranks tried to get forward on foot, but immediately sank over their heads in unexpected hollows among the shoals. Then the waiting Americans opened up with bullets and round shot, and there was nothing for the raging Redcoats to do but go splashing back as best they could to the shore they had just left. And there they remained for the rest of the day, inactive except for a ceaseless struggle against the swamp mosquitoes. At the other end of Sullivan’s Island the battle had been joined more in accordance with Clinton’s original schedule. The Actaeon led the fleet up the channel, battle flags streaming, lofty tops aswarm with marine sharpshooters, their muskets poised to pick off the Yankee gunners hidden from the ship’s decks behind the fort’s ramparts. But Moultrie had thought of that too, and the eagle-eyed “jollies” found to their chagrin that the ramparts were of such height and width that they effectively screened any aerial view of the colonial troops beneath them. Abreast of the fort the Actaeon let go her anchor; in her wake the flagship Bristol followed suit, then the Experiment and finally the Solebay. Hardly had the vessels’ headway stopped when, as if touched off by a common fuse, their broadside batteries flared across the water in one simultaneous and concerted blast. The firing platforms inside the fort shook as from an earthquake; solid shot rained on the ramparts where they sank ineffectually into the soft palmetto logs, or buried themselves in the sand. 
In the momentary pause that followed this initial action the cheers of the American gun crews could be plainly heard on the attacking craft, mingled with taunting laughs and raucous warnings. Echoing this defiance the fort’s guns spoke slowly, one by one. The Yankee magazines held little ammunition; powder and ball must be carefully husbanded until additional charges could be unloaded from the supply schooner that was even then moored behind the island. But what Moultrie’s fire may have lacked in quantity it made up in effectiveness. The great ships shivered from the impact of the iron balls loosed against them at such short range; splinters flew and water spouted and suddenly the Bristol was seen to yaw and shift out of line. A lucky shell from the fort had cut her anchor cable and the tide slewed the mighty bulk across the channel with the unprotected stern facing the fort’s cannon. Such a golden opportunity for destructive action could not be missed, nor was it. Eager gun sections rushed up their needed fresh supplies of powder and shot, rammed the charges home, and engulfed the hapless flagship in a wave of fire. Her mainmast tottered and crashed over the side, followed by her mizzen. Along her cluttered main deck, through her sturdy upper works, Moultrie’s men swept a stream of cannon and musket fire as with the spray from a hose. Finally, broken and all but sinking, the once proud leader drifted out of harm’s way with heavy casualties.
<urn:uuid:a75c65c0-5f11-4165-be85-175e66ce3870>
CC-MAIN-2016-26
http://www.americanheritage.com/content/palmetto-fort-palmetto-flag?page=3
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395620.9/warc/CC-MAIN-20160624154955-00081-ip-10-164-35-72.ec2.internal.warc.gz
en
0.965732
1,091
2.875
3
Osteoporosis is a condition in which bones become fragile and can break easily. Even though it is most often associated with women, men can also develop osteoporosis. In fact, estimates based on data from the CDC indicate that by 2020, 3.3 million men will have osteoporosis. For osteoporosis resources from FNIC, go to Diet and Disease > Osteoporosis. NIH provides more information on osteoporosis and its prevention:
<urn:uuid:51dd72d1-3f7b-4f9e-a7af-c3f9092b3711>
CC-MAIN-2016-26
https://fnic.nal.usda.gov/faq
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397695.90/warc/CC-MAIN-20160624154957-00111-ip-10-164-35-72.ec2.internal.warc.gz
en
0.962254
101
3.234375
3
Even healthy pregnant women can be at risk for pregnancy problems caused by oral bacteria. Researchers from Case Western Reserve University began to understand which bacteria from the 700 species living in the mouth are responsible for the growing health problem of preterm births and stillbirths. Yiping Han from the department of periodontics in the CWRU School of Dental Medicine led the study, which found that several new bacteria originating in the mouth travel through the blood to cause an inflammatory reaction in the placenta and eventually cause a range of health issues from miscarriages to stillbirths. The findings were reported this month in Infection and Immunity. Researchers have been baffled as to why oral bacteria show up in the placenta or amniotic fluid of premature births or stillbirths, even in pregnant women with only mild oral health problems or none at all. The researchers found that after injecting the tails of pregnant mice with saliva from healthy people and dental plaque from those with periodontal disease, oral bacteria continued to grow in the placentas 24 hours later, after the bacteria had left the blood. Prior to Han's work in connecting oral bacteria to the problems in pregnancy, it was thought that infections were transmitted through the vaginal tract. Information from Han's previous studies over the past decade shows that oral bacteria can be transported through the blood when there is a cut in the mouth's lining or an oral health problem like gingivitis or periodontitis which breaks down the defenses in the mouth's lining that protect against bacteria entering the bloodstream. According to Han, this suggests that even healthy pregnant women should be concerned that normally occurring bacteria in the mouth can enter the blood stream and make their way into the placenta's immune-free environment to ignite an inflammatory reaction that can lead to premature births or stillbirths. "We found many bacteria did locate to the placenta, but they were not the most famous periodontal pathogens," said Han. "In fact, many of the bacteria were the kind that are found in healthy people's mouths." These include Streptococcus, Leptotrichia, Fusobacterium nucleatum, and Veillonella, among others. The researchers are finding that many of the bacteria found in the placentas cannot be grown in the lab, which has been the gold standard. They are identified through DNA cloning techniques that match the bacteria in the placenta with the bacteria found in the mouth. This DNA fingerprinting allows researchers to trace the origin of the bacteria. Han notes that as long as these bacteria stay in the mouth, they cause very few problems. However, in the uterus, they stimulate the inflammatory response that leads to cervical and membrane weaknesses and ruptures and uterine contractions. In several case studies, Han said the mothers did not have pronounced periodontal disease or periodontitis. The mothers did have a form of pregnancy-associated gingivitis, which results from hormonal changes and disappears after the birth of the baby. "The normal healthy woman is under risk," Han said. "People should be concerned about it. This is what the experiment is showing." She added, "We need to know which bacteria colonize in the placenta and design therapies for better treatments." These are the kinds of bacteria that are with us all our lives, and only cause disease when the opportunity arises, Han said. 
She added, "What is happening with the oral bacteria colonizing in the placenta happens with other diseases that triggers an inflammatory response." During CWRU Research ShowCASE 2010, Yann Fardini, one of the paper's researchers, received honors for his presentation on the research. Contact: Susan Griffith, Case Western Reserve University
<urn:uuid:cf2b3118-c3c8-498b-a0a5-3103a7a7c5bd>
CC-MAIN-2016-26
http://www.bio-medicine.org/biology-news-1/Even-healthy-pregnant-women-need-to-worry-about-oral-bacteria-13347-1/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783404826.94/warc/CC-MAIN-20160624155004-00159-ip-10-164-35-72.ec2.internal.warc.gz
en
0.956894
766
3.34375
3
As far as I know, Franklin Delano Roosevelt was our first President frequently referred to by his initials, as in FDR. I assume this came about due to FDR's long last name having three syllables. I know some people, like Richard Nixon, refer to FDR's cousin Teddy as TR, but I don't know if Teddy Roosevelt's contemporaries did. FDR's successor, Harry S Truman, did not become known as HST. The S between Harry and Truman is not the first letter of his middle name. Truman's middle name is a middle initial. With no period after it. Why, I do not know. Truman was followed by a President with a long last name, but he did not become known as DDE. Instead he was known as Ike. Ike was Dwight David Eisenhower. Ike sounds better than saying DDE. Ike was followed by JFK. Who was followed by LBJ. LBJ had a fairly short last name. But I think people liked the sound of saying JFK and so they segued easily into Lyndon Baines Johnson being LBJ. It worked great for anti-war chants, as in "Hey Hey LBJ. How many kids did you kill today?" LBJ was to be our last President known by his initials. Nixon followed him and while there were some instances of him being referred to as RMN, it just did not stick. Mostly, I suppose, because Nixon is a nice short name with a punch to it when said aloud, like Hitler. Nixon was followed by Ford. Again a short name. Then Carter. Again short. I think Jimmy Carter's middle name is Earl. That'd make him JEC. That just looks weird. Jimmy Carter was followed by Ronald Reagan. I do remember seeing Reagan referred to as RR a time or two, but that definitely did not stick. It was just way too easy to say Reagan, a good short name, like Nixon and Hitler. Reagan was followed by Bush. No need to call him GHWB. Bush was followed by Clinton. Again a nice 2 syllable name that has a punch to it, so it was Clinton, not WJC. That would have looked too much like the initials for Water Closet. Of course, Clinton was unfortunately followed by another Bush, who was never referred to as GWB, but sometimes the soon-to-be-retired Bush was referred to as W. Tomorrow the world breathes a sigh of relief as W is replaced by Barack Hussein Obama. I'm fairly certain he will not be referred to by his initials. BHO sounds too much like HBO. And if you take the middle name out you are left with BO. And that definitely would not sound Presidential.
<urn:uuid:87efb25e-d242-427e-aad2-8bde3cc182c1>
CC-MAIN-2016-26
http://durangotexas.blogspot.com/2009/01/fdr-ike-jfk-lbj-nixon-bho.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397695.90/warc/CC-MAIN-20160624154957-00049-ip-10-164-35-72.ec2.internal.warc.gz
en
0.992637
562
2.5625
3
A person who is learning a subject or skill: a fast learner
More example sentences
- Surround those slow learners with fast learners who understand how to promote their creative ideas.
- I repeated it easily; I'd always been a fast learner with languages.
- For a learner to acquire skills in a foreign language, correctness of speech in his mother tongue should be taught.
Words that rhyme with learner: Annapurna, burner, discerner, earner, Myrna, Smyrna, spurner, taverna, turner, Verner, Werner, yearner
<urn:uuid:09a11eae-bf05-4a5f-9521-5928042b2b9f>
CC-MAIN-2016-26
http://www.oxforddictionaries.com/definition/american_english/learner
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393442.26/warc/CC-MAIN-20160624154953-00185-ip-10-164-35-72.ec2.internal.warc.gz
en
0.922111
162
3.0625
3
How Internet Filters Impact Student Learning in High Schools A required reference for all involved in education, particularly those concerned with intellectual freedom. A pioneering study of how internet controls impact student learning and teaching effectiveness. The Social Construction of Web Appropriation and Use This innovative study provides an in-depth understanding of how librarians have perceived the World Wide Web from its early implementation to 2003, and how the Web is appropriated and used in libraries. The most rigorous account and investigation of online adult learning ever conducted. This book is must reading for scholars and students to understand this critical field. The Changing Psychology and Evolving Pedagogy of Online Learning A pioneering study, this monograph examines the broader implications for the design of conversational environments, whether for educational or business use. Does having Internet access at home help improve academic performance? This book answers this question with its thorough investigation. Examines and provides important insights on how the Internet influences the learning process. The book also notes that personal factors such as motivation and expertise interact with web site design to influence learning outcomes. It also sheds light on how learning from a website can be primed based on the content presented before exposure. A trailblazing study in the field of online instruction, this book is critical for all collections in Communications and Education. A much-awaited book--one of the first comprehensive investigations of the relationship between virtual charter schools and home schooling. A Case Study of a Biology Museum Online Provides important results and implications for any educator concerned with improving learning outcomes. This work is a beacon that will guide educators in their curriculum development efforts.
<urn:uuid:bf4e439a-1572-42d6-8d14-6163571fee86>
CC-MAIN-2016-26
http://cambriapress.com/cambriapubs.cfm?sa=16
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395620.9/warc/CC-MAIN-20160624154955-00133-ip-10-164-35-72.ec2.internal.warc.gz
en
0.888919
325
2.75
3
Puffing Sun gives birth to reluctant eruption A suite of Sun-gazing spacecraft, SOHO, STEREO and Solar Dynamics Observatory (SDO), have spotted an unusual series of eruptions in which a series of fast 'puffs' force the slow ejection of a massive burst of plasma from the Sun's corona. The eruptions took place over a period of three days, starting on 17 January 2013. Images and animations of the phenomena will be presented at the National Astronomy Meeting 2014 in Portsmouth by Nathalia Alzate on Monday 23 June. "Looking at the corona in Extreme UltraViolet light we see the source of the puffs is a series of energetic jets and related flares," explained Alzate. "The jets are localised, catastrophic releases of energy that spew material out from the Sun into space. These rapid changes in the magnetic field cause flares, which release a huge amount of energy in a very short time in the form of super-heated plasma, high-energy radiation and radio bursts. The big, slow structure is reluctant to erupt, and does not begin to smoothly propagate outwards until several jets have occurred." "We still need to understand whether there are shock waves, formed by the jets, passing through and driving the slow eruption, or whether magnetic reconfiguration is driving the jets allowing the larger, slow structure to slowly erupt. Thanks to recent advances in observation and in image processing techniques we can throw light on the way jets can lead to small and fast, and/or large and slow, eruptions from the Sun," said Alzate. Notes for editors The RAS National Astronomy Meeting (NAM 2014) will bring together more than 600 astronomers, space scientists and solar physicists for a conference running from 23 to 26 June in Portsmouth. NAM 2014, the largest regular professional astronomy event in the UK, will be held in conjunction with the UK Solar Physics (UKSP), Magnetosphere Ionosphere Solar-Terrestrial physics (MIST) and UK Cosmology (UKCosmo) meetings. The conference is principally sponsored by the Royal Astronomical Society (RAS), the Science and Technology Facilities Council (STFC) and the University of Portsmouth. Meeting arrangements and a full and up to date schedule of the scientific programme can be found on the official website and via Twitter. The University of Portsmouth is a top-ranking university in a student-friendly waterfront city. It's in the top 50 universities in the UK, in The Guardian University Guide League Table 2014 and is ranked in the top 400 universities in the world, in the most recent Times Higher Education World University Rankings 2013. Research at the University of Portsmouth is varied and wide ranging, from pure science – such as the evolution of galaxies and the study of stem cells – to the most technologically applied subjects – such as computer games design. Our researchers collaborate with colleagues worldwide, and with the public, to develop new insights and make a difference to people's lives. Follow the University of Portsmouth on Twitter. The Royal Astronomical Society (RAS), founded in 1820, encourages and promotes the study of astronomy, solar-system science, geophysics and closely related branches of science. 
The RAS organises scientific meetings, publishes international research and review journals, recognizes outstanding achievements by the award of medals and prizes, maintains an extensive library, supports education through grants and outreach activities and represents UK astronomy nationally and internationally. Its more than 3800 members (Fellows), a third based overseas, include scientific researchers in universities, observatories and laboratories as well as historians of astronomy and others. Follow the RAS on Twitter. The Science and Technology Facilities Council (STFC) is keeping the UK at the forefront of international science and tackling some of the most significant challenges facing society such as meeting our future energy needs, monitoring and understanding climate change, and global security. The Council has a broad science portfolio and works with the academic and industrial communities to share its expertise in materials science, space and ground-based astronomy technologies, laser science, microelectronics, wafer scale manufacturing, particle and nuclear physics, alternative energy production, radio communications and radar. It enables UK researchers to access leading international science facilities for example in the area of astronomy, the European Southern Observatory. Follow STFC on Twitter.
<urn:uuid:c32f8045-5535-4630-a55b-eef1d7807031>
CC-MAIN-2016-26
http://www.ras.org.uk/news-and-press/news-archive/254-news-2014/2470-puffing-sun-gives-birth-to-reluctant-eruption
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395621.98/warc/CC-MAIN-20160624154955-00145-ip-10-164-35-72.ec2.internal.warc.gz
en
0.899356
937
2.796875
3
Officers teach Rainbow Preschool youngsters safety Rainbow Preschoolers learned a lesson in safety last week from Baldwin City police officers. Sgt. Colleen Larson talked to the preschoolers about the importance of wearing seatbelts in the car, helmets while riding a bike or rollerblading, and always looking both ways before crossing a street. The preschoolers were quite proud that they already do those things, as taught by their parents. However, some of the children admitted that sometimes their parents "forget" to follow at least one of the safety rules: wearing a seat belt. "Remind them that you care about them and don't want them to get hurt," Larson said. "They need to be safe, too." Larson offered these safety reminders: Always wear a seatbelt while riding in a car. Always wear a helmet while riding bikes, rollerblading or skateboarding. Look both ways and listen before crossing the street. The safest place to cross a street is at a street corner. "Remember seatbelts, helmets and watch for cars," Larson said. "I hope you have a good time while you are outside playing, and be safe." Larson and Bill Dempsey helped the preschoolers, who were wearing police hats made of construction paper, fill out identification cards, complete with thumbprint and photo.
<urn:uuid:a5a0f4e6-4b5e-423a-8e0a-d83f20742f33>
CC-MAIN-2016-26
http://signal.baldwincity.com/news/2000/mar/01/officers_teach_rainbow/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396106.71/warc/CC-MAIN-20160624154956-00108-ip-10-164-35-72.ec2.internal.warc.gz
en
0.974268
275
2.734375
3
We will read a representative group of dramatic works by William Shakespeare, including plays from all four genres to which he contributed: comedy, tragedy, history plays, and romance. These works have become the touchstones of all that we treasure in the western literary canon, and we will pay considerable attention to the features that have made them so, but they did not function primarily as literary artifacts in their own era, nor was the popular drama considered to be an entirely respectable form of entertainment. We will consider the political and social circumstances in which the vital and unprecedented popular theater of early modern England emerged, as well as the practical components of Renaissance stagecraft. Plays likely to be on our syllabus are: A Midsummer Night’s Dream; Henry IV, Part One; Twelfth Night; Hamlet; Othello; King Lear; The Tempest. Midterm, final exam, frequent quizzes, five in-class writing assignments. Student contributions will include regular attendance and participation at lecture and discussion sessions, frequent quizzes, two essays, a midterm and a final.
<urn:uuid:6a77ffd7-b14a-4f66-a741-bbb84602e831>
CC-MAIN-2016-26
http://www.lsa.umich.edu/cg/cg_detail.aspx?content=1970ENGLISH367001&termArray=w_14_1970
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395346.6/warc/CC-MAIN-20160624154955-00060-ip-10-164-35-72.ec2.internal.warc.gz
en
0.948351
224
3.3125
3
Symphony No. 9 Choral (1824) Elisabeth Schwarzkopf (soprano); Elisabeth Höngen (alto); Hans Hopf (tenor); Otto Edelmann (bass) Bayreuth Festival Chorus and Orchestra/Wilhelm Furtwängler rec. Bayreuth, 29 July 1951. ADD NAXOS HISTORICAL Why review one of the most famous recordings of one of the most important works in the entire classical repertoire? In theory, this should be self-recommending – Furtwängler is one of the great Beethoven specialists and this is a work that meant a great deal to him as an artist and as a human being. I’m also assuming that anyone reading this has at least a vague idea what the symphony sounds like and won’t need a description. This is basic repertoire after all, and needs to be listened to. Instead, I’m writing because I want to share what this music means and why this recording in particular is worth listening to. Note the date and place it was originally made. It marked the re-opening of the Bayreuth Festival in 1951. The festival had been tainted with Nazi associations because Hitler had enjoyed Wagner’s music, and Winifred Wagner had admired him. There’s plenty of serious scholarly research into this, so this is no place to pass snap judgements. Beethoven existed before the Nazis and represented a much deeper tradition. Choosing the Ninth with its theme of universal brotherhood was thus an act of hope. All the performers here, and the audience, too, would have been intimately aware of what had happened, and why the Ninth mattered. I think this accounts for the fervent intensity of the performance. Furtwängler himself had been condemned for not escaping into exile, but again, research has shown that nothing is simply black and white. Some years ago, I worked in the archives and found handwritten letters from ordinary people who’d regarded his concerts as an oasis of sanity in a mad world, music symbolising an alternative to the soulless regime. The March 1942 recording of the Ninth Symphony and the filmed concert made some weeks before capture something of the period in which they were made. The film, naturally, shows Party bigwigs, but ordinary people knew very well that Beethoven opposed dictatorships and oppression. They were also far more aware of Schiller’s libertarian philosophy than people are today. So the stony-faced Party goons sit in denial, pretending that Beethoven meant nothing and that Furtwängler was just playing “sounds”. But irony wasn’t lost on people who really understood. When this recording was made, Hitler was dead. Bayreuth was revived, but under Wieland Wagner, who knew there was more to the composer than his mother - and indeed grandmother - did. The Bayreuth Festival Orchestra may not be as precise and sophisticated as the Berlin Philharmonic, but they’re enthusiastic. I particularly like the way they play, truly molto vivace, the references to themes that will expand into the final Ode. Furtwängler lets the Adagio unfold in a leisurely way. Since this is Bayreuth, the performance reaches its pinnacle in the final movement. Very quietly, Furtwängler introduces the main theme, gradually building up towards the entry of the bass, Otto Edelmann, who’d been a prisoner of war, captured by the Russians. The pure freshness of Elisabeth Schwarzkopf’s voice soars above the ensemble, her ringing tones expressing the spiritual quality of the symphony. Furtwängler emphasises the symphony’s warmth and humanity, and its powerful sense of triumph. He was artist enough to know that music lies not in the notes but in interpretations that bring out its spirit. 
“Sondern lasst uns angenehmere anstimmen und freudenvollere”, goes the text, and the music to which it is set infuses the whole symphony. The music is so universal that it's been adopted as the European Anthem. Of course this is all anathema if music has no context and meaning. Luckily for us, Furtwängler didn't think so.
<urn:uuid:02c7fdd5-9e79-4e08-88e5-48864621eefc>
CC-MAIN-2016-26
http://www.musicweb-international.com/classrev/2007/Feb07/Beethoven9_Furtwangler_8111060.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399106.96/warc/CC-MAIN-20160624154959-00015-ip-10-164-35-72.ec2.internal.warc.gz
en
0.920369
1,051
2.6875
3
An operational amplifier (op-amp) is an integrated circuit (IC) chip that contains a high-gain differential amplifier. The inputs have very high input impedance and draw essentially no current.
Schematic of an op-amp in an inverting amplifier circuit
In this circuit the output voltage is governed by the following relationship:
Vout = -Vin ( Rfeedback / Rin )
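As a quick illustration of the gain relationship above, here is a minimal Python sketch of the ideal inverting amplifier; the resistor and voltage values are arbitrary examples, not taken from the page:

```python
def inverting_amp_output(v_in, r_feedback, r_in):
    """Ideal inverting op-amp: Vout = -Vin * (Rfeedback / Rin)."""
    return -v_in * (r_feedback / r_in)

# Example: 0.5 V input, 100 kOhm feedback resistor, 10 kOhm input resistor
# gives a gain of -10, i.e. an output of -5 V (ignoring supply-rail limits).
print(inverting_amp_output(0.5, 100e3, 10e3))  # -5.0
```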
<urn:uuid:e9961d73-a7ba-420d-b2e7-d86cdf372564>
CC-MAIN-2016-26
http://elchem.kaist.ac.kr/vt/chem-ed/electron/devices/op-amps.htm
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398873.39/warc/CC-MAIN-20160624154958-00085-ip-10-164-35-72.ec2.internal.warc.gz
en
0.840081
98
2.984375
3
This article seeks to clarify and to eliminate confusion on the topic. Note that, although he can, Jonsson does not paint in the style of Trompe L'oeil. His style is, however, based on many years of painting very realistically. About Trompe L'oeil Trompe L'oeil, [pronounced: trome rhymes with home • ploe rhymes with go • eel rhymes with seal. The accent is on the last syllable. Say it fast and run it all together.] translated from French, means "to fool, mislead, or trick the eye". For that very reason, trompe l'oeil was frequently utilized by the great artists of the Renaissance period, for example, to momentarily trick the viewer into believing that the painted objects they were looking at were real. It is a very difficult technique used by many artists throughout history. The illusionistic trompe l'oeil technique has its roots in antiquity. Pliny the Elder's "History of Nature" is often quoted, where the ancient painter Zeuxis is praised for painting such realistic grapes that they attracted hungry birds. So impressed was fellow artist and rival Parrhasius that in a few weeks he asked Zeuxis to come to his studio to see his painting. Zeuxis went to Parrhasius' studio and there before him was the painting draped by a curtain. Zeuxis approached the painting and when he tried to pull back the curtain to reveal the painting, he found that the curtain had been painted. So enthralled by the anticipation of a painting "behind" the curtain, Zeuxis was fooled by his rival artist. One can see how trompe l'oeil is the most naturalistic form of realism. According to the art research department at Yale University, the Trompe L'oeil technique is deeply embedded in history. During the Renaissance period, the style of painting was used to show perspective and realism. The earliest known use of Trompe L'oeil can be seen in architectural structures of the Medieval period. Artists created their artwork in churches during this time. They would paint their works of art on the walls to give them the appearance of columns or other architectural supports. This technique was also used when there was a lack of money to actually build these types of supports. The architect Donato Bramante (1444-1516) was commissioned to create an illusion of space in a church. His work was successful. He created an architectural illusion of space which visually appeared to be three or four times larger than it was in reality. In another case, during this time, Federigo da Montefeltro, Duke of Urbino, commissioned the artist Baccio Pontelli to create the appearance of shelves and open cabinets. In 1680, the artist Andrea Pozzo was hired to paint in the Trompe L'oeil style as well. He painted illusionistic scenes on the vaulted ceiling of the Saint Ignatius church in Rome. His piece is noted for a corridor that leads to the rooms that Saint Ignatius occupied in his lifetime. Trompe l'oeil works best when a viewer of the art is standing in one spot. This way he or she can get the full effect of the illusion. In America, trompe l'oeil had its beginnings in Philadelphia with the examples by Charles Willson Peale (1741-1827) and his eldest son Raphaelle Peale (1774-1825). Another great artist in trompe l'oeil still life (as opposed to large scale works) was William Harnett (1848-1892). One of his paintings hangs in the Boston Museum of Fine Arts. You can stand no farther than two feet away and still be unable to tell whether the violin is fake or real. Really. 
Of course, large scale trompe l'oeil mural paintings like Jonsson's Fifth Avenue Terrace and The Sound are a bit of a different animal, yet the goal is very much the same.
<urn:uuid:fcdfc1af-af7f-4b7c-a376-2875a86d382a>
CC-MAIN-2016-26
http://www.jonssonsworld.com/Brief_History_of_Trompe_Loeil.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397748.48/warc/CC-MAIN-20160624154957-00015-ip-10-164-35-72.ec2.internal.warc.gz
en
0.977831
902
3.140625
3
We aim to implement functions that can identify and find people and objects by picking out sounds and voices from the vast amount of audio and video information that is obtained via microphones and cameras and made available by ubiquitous networks. Robust media search - For media information such as video, audio and images, we are developing a technique for searching and identifying media content at high speed that is unaffected by signal distortion or noise. With this robust media search (RMS) technique, we can support the storage and distribution of information and thus handle the media information explosion. Media information extraction - We are developing a technique for automatically extracting information related to objects and events included in media information such as video, audio and images. This technique connects media information with symbolic information such as text and attributes, thereby bringing new value to vast quantities of media information. - With the world’s most advanced speech enhancement and speech recognition technology at its core, we are implementing technology that uses audio information to help us understand the surrounding environment as a communication scene. With this technology, we will support new services that help people to communicate. Dynamical information processing - By making active use of disordered phenomena such as laser chaos, we are implementing a high-speed random number generator and an information-theoretically secure encryption system. With these technologies, we will contribute to the implementation of safe and secure communications. Quantum information science - By applying the principles of quantum mechanics, it will become possible to perform diverse types of information processing that are currently regarded as impossible. Through our research into quantum information science centered on quantum computing and quantum cryptography, we aim to implement quantum telecommunication techniques in the near future. Information security based on formal methods - We are exploring the possibilities of formal methods, which are techniques for providing rigorous mathematical assurances of the security of systems. We aim to provide a high level of safety and security in telecommunication systems that provide essential services such as e-commerce, e-government and e-health.
<urn:uuid:e98e61d0-8ede-4817-ad9d-46ddb1dc055d>
CC-MAIN-2016-26
http://www.kecl.ntt.co.jp/rps/english/lab_e/media_lab_e.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397797.77/warc/CC-MAIN-20160624154957-00060-ip-10-164-35-72.ec2.internal.warc.gz
en
0.91916
408
2.53125
3
Lives of Girls Who Became Famous Margaret Fuller Ossoli Margaret Fuller, in some respects the most remarkable of American women, lived a pathetic life and died a tragic death. Without money and without beauty, she became the idol of an immense circle of friends; men and women were alike her devotees. It is the old story: that the woman of brain makes lasting conquests of hearts, while the pretty face holds its sway only for a month or a year. Margaret, born in Cambridgeport, Mass., May 23, 1810, was the oldest child of a scholarly lawyer, Mr. Timothy Fuller, and of a sweet-tempered, devoted mother. The father, with small means, had one absorbing purpose in life,--to see that each of his children was finely educated. To do this, and make ends meet, was a struggle. His daughter said, years after, in writing of him: "His love for my mother was the green spot on which he stood apart from the commonplaces of a mere bread-winning existence. She was one of those fair and flower-like natures, which sometimes spring up even beside the most dusty highways of life. Of all persons whom I have known, she had in her most of the angelic,--of that spontaneous love for every living thing, for man and beast and tree, which restores the Golden Age." Very fond of his oldest child, Margaret, the father determined that she should be as well educated as his boys. In those days there were no colleges for girls, and none where they might enter with their brothers, so that Mr. Fuller was obliged to teach his daughter after the wearing work of the day. The bright child began to read Latin at six, but was necessarily kept up late for the recitation. When a little later she was walking in her sleep, and dreaming strange dreams, he did not see that he was overtaxing both her body and brain. When the lessons had been learned, she would go into the library, and read eagerly. One Sunday afternoon, when she was eight years old, she took down Shakespeare from the shelves, opened at Romeo and Juliet, and soon became fascinated with the story. "What are you reading?" asked her father. "Shakespeare," was the answer, not lifting her eyes from the page. "That won't do--that's no book for Sunday; go put it away, and take another." Margaret did as she was bidden; but the temptation was too strong, and the book was soon in her hands again. "What is that child about, that she don't hear a word we say?" said an aunt. Seeing what she was reading, the father said, angrily, "Give me the book, and go directly There could have been a wiser and gentler way of control, but he had not learned that it is better to lead children than to drive them.
<urn:uuid:3c018792-229f-47ea-9bfb-5379231f6527>
CC-MAIN-2016-26
http://www.free-ebooks.net/ebook/Lives-of-Girls-Who-Became-Famous/html/36
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783393442.26/warc/CC-MAIN-20160624154953-00147-ip-10-164-35-72.ec2.internal.warc.gz
en
0.988456
633
2.84375
3
Bronchiectasis in Cats A cat's trachea, or windpipe, is divided into two main bronchi, or tubes, which feed air into the lungs. The two tubes that begin the bronchial tree further divide into smaller branches, which divide several more times to form the bronchial tree. In bronchiectasis, the bronchi are irreversibly dilated due to a destruction of the elastic and muscular components in the airway walls. This may occur with or without accompanying accumulation of lung secretions. Dilatation may be associated with infections of the bronchi, pneumonia, lung damage, chronic bronchitis (inflammation), decreased functional capacity of the lungs, or abnormal cell growth (neoplasia). This condition is rarely seen in the cat population, but when it does occur, it tends to affect older male cats. Symptoms and Types - Chronic cough (moist and productive) - Hemoptysis (coughing up blood) in some cats - Intermittent fever - Exercise intolerance - Rapid breathing - Difficulty in breathing normally, especially after exercise - Chronic nasal discharge Causes - Primary ciliary dyskinesia (malfunction of the mucus-clearing cilia in the lungs) - Long-standing infections - Inadequately treated infections or inflammations in the lungs - Smoke or chemical inhalation - Aspiration pneumonia (pneumonia caused by food, vomit, or other content being breathed into the lungs) - Radiation exposure - Inhalation of environmental toxins followed by infections - Obstruction of bronchi due to a foreign body - Neoplasia of the lungs There are variable causes which may lead to bronchial inflammation in your cat. Therefore, a detailed history and a complete physical examination are essential for diagnosis. You will need to give your veterinarian a thorough history of your cat's health, the onset of symptoms, and possible incidents that might have led to this condition. Standard laboratory testing will include a complete blood count (CBC), biochemistry profiling, and urinalysis. Blood gas analysis will indicate the functional capability of the lungs. These tests will be helpful in looking for infections or other changes related to the underlying disease. Your veterinarian will also take x-ray images of the chest, respiratory tract, and bronchial tubes, which may or may not show abnormalities in the architecture of the lungs, including dilatation of the bronchi. It is hoped that x-rays will reveal characteristic abnormalities in the bronchi that are related to this disease, but that is not always the case. Other changes in the lungs pertaining to chronic infections typically can be visualized using x-rays. Long-term inflammation will leave evidence that can be visually examined. More sensitive testing, like computed tomography (CT) scanning, can be used for some patients, and this test may reveal more detailed information about structural changes within the lungs. Your veterinarian will also take samples of tissue and fluid from the bronchi for laboratory evaluation.
Urinalysis: An in-depth examination of the properties of urine; used to determine the presence or absence of illness
Trachea: The windpipe; it carries air from the bronchi to the mouth
Prognosis: The prediction of a disease’s outcome in advance
<urn:uuid:3902400e-691a-4771-bd52-663ba4807d37>
CC-MAIN-2016-26
http://www.petmd.com/cat/conditions/respiratory/c_ct_bronchiectasis
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397695.90/warc/CC-MAIN-20160624154957-00128-ip-10-164-35-72.ec2.internal.warc.gz
en
0.908893
666
2.96875
3
It’s not often you get to see a telescope dance, but that’s exactly what happened in the thin, dry air of Chile’s Atacama Desert on March 12. That’s when astronomy’s newest, biggest, most powerful stargazing machine was formally dedicated, after more than a year of preliminary operations. As the speeches from various political and scientific dignitaries came to a close, the Atacama Large Millimeter-submillimeter Array, or ALMA — a set of 57 radio dishes perched on the Chajnantor Plateau, some 16,600 ft. (5,060 m) above sea level — began to swivel and sway, in perfect, choreographed unison, as music filled a tent packed with scientific VIP’s. OK, maybe it was a little over the top, but ALMA’s creators, including scientists and engineers from Europe, North America, Asia and Chile had the right to make a fuss. The $1.3-billion array is a technological tour-de-force that will produce images ten times sharper than the Hubble; study galaxies from the dawn of time; tease out the secrets of solar systems as they form; and more. “Within a decade,” says Leslie Sage, Senior Editor for Physical Sciences at the journal Nature, “ALMA will have revolutionized astronomy more than the Hubble ever has.” Actually, that revolution has already begun. Even as ALMA’s dishes were performing their coming-out ballet, astronomers were announcing that during its earlier, shakedown runs, the telescope had discovered surprising numbers of so-called “starburst galaxies,” where new suns are being born at a prodigious rate, just a billion years after the Big Bang — which is a billion years earlier than anyone had expected. Last year, a team of observers used the array to detect the presence of unseen planets orbiting the star Fomalhaut, inferring the existence of the worlds by their effects on a ring of dust. “These first results are spectacular,” says Pierre Cox, ALMA’s incoming director, “and they were done with a limited number of antennas”—in the case of the planets, with just 15 of what will ultimately be 66 dishes, working in concert. That’s one big reason the new telescope is so powerful: by combining the signals from all those dishes, ALMA can simulate a single dish as much as 10 miles (16 km) across. That makes ALMA’s images preternaturally sharp. The powerful detectors at the heart of each dish, meanwhile, cooled to within a few degrees above absolute zero (-460º F, or -273º C) can sense the ping of incoming electromagnetic radiation with unprecedented sensitivity. In this case, the radiation in question isn’t ordinary visible light, but rather a form of light that lies in between the infrared and the microwave parts of the spectrum. Some astronomical phenomena, like the rings of cool dust that eventually turn into planets, naturally glow brightest in the millimeter-submillimeter part of the spectrum. Others, such as distant galaxies, start off with a smaller wavelength but their emissions are then stretched into the millimeter-submillimeter region as they cross a universe that’s constantly expanding. Unfortunately, this sort of radiation can’t penetrate Earth’s atmosphere very easily, and launching huge radio dishes into space isn’t very practical. So ALMA’s partner institutions — the U.S. National Radio Astronomy Observatory, the National Astronomical Observatory of Japan and the European Southern Observatory — decided to get as close to space as possible. 
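As a rough, back-of-the-envelope illustration of that sharpness claim (my own assumed numbers, not figures from the article), an interferometer's finest resolvable angle is roughly the observing wavelength divided by the longest baseline between dishes:

```python
import math

wavelength_m = 1e-3   # ~1 mm, a representative ALMA observing wavelength (assumed)
baseline_m = 16e3     # the ~10 mile (16 km) maximum dish separation quoted above

theta_rad = wavelength_m / baseline_m        # diffraction-limited resolution, ~lambda/B
theta_arcsec = math.degrees(theta_rad) * 3600.0

# ~0.013 arcseconds, comparable to or finer than Hubble's roughly 0.05" in visible light
print(f"~{theta_arcsec:.3f} arcseconds")
```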
Chile’s Chajnantor plateau is ideal: about half of Earth’s atmosphere lies below it, and the skies above are extraordinarily dry, with little water vapor to distort ALMA’s view. The downside is that construction workers, engineers and astronomers have to spend their days in a place where altitude sickness is a real concern. Indeed, visitors who come to the so-called “high site,” where the antennas actually sit, are handed disposable oxygen bottles for a quick puff if things start to go hazy. The operations center, where the antennas are assembled and where the inauguration took place, is at a still-lofty but more manageable 9,000 ft. (2,740 m) or so. By the end of 2013, all 66 of ALMA’s dishes should be installed and fully operational, and the world’s most powerful astronomical instrument will be firmly on the way to making a series of mind-expanding discoveries, including…well, nobody can really say. “We didn’t know what we were going to find with Hubble,” says Ethan Schreier, President of Associated Universities, Inc., which oversees the National Radio Astronomy Observatory, “and most of what we found, we couldn’t have predicted.” If the same goes for ALMA — and there’s no reason it shouldn’t — it will prove yet again that the British biologist J.B.S. Haldane was dead on when he said, “The Universe is not only queerer than we suppose, but queerer than we can suppose.”
<urn:uuid:f072285f-d4d3-46ab-b4c6-a58c163ea308>
CC-MAIN-2016-26
http://science.time.com/2013/03/20/a-super-telescope-goes-to-work/print/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783398628.62/warc/CC-MAIN-20160624154958-00174-ip-10-164-35-72.ec2.internal.warc.gz
en
0.944148
1,111
3.46875
3
Welcome to Fumane Cave official website A few miles north of the town of Fumane (Verona), on the old road to Molina in the Val dei Progni, in the 1960s archaeologist G. Solinas discovered what is now called Riparo Solinas (Solinas rock shelter) or simply Grotta di Fumane (Fumane cave), one of the most highly regarded monuments of ancient prehistory. This site is extremely important for understanding the significant biological and cultural change in human evolution which occurred around 40,000 years ago. Grotta di Fumane is one of the major prehistoric archaeological sites in Europe. The rich evidence preserved in the deposits filling the cave has been studied since 1988 by the Regional Authority (Soprintendenza del Veneto) for Archaeological Heritage, by the University of Ferrara, the University of Milan and the Natural History Museum of Verona and is an exceptional document of the lifestyles of both Neanderthal man and early Modern humans. This site is essential for studying the way of life, the economy, technology and spirituality of the ancient humans that frequented the Valpolicella area for over 50,000 years, and also for our understanding of the mechanisms that led, around 40,000 years ago, to the affirmation of Modern Man in Europe. Since 2005 the cave has been accessible to visitors of the Lessinia Park. The traces of Palaeolithic living spaces revealed throughout the stratigraphic sections are an evocative journey through the past.
<urn:uuid:99f1eb7e-3541-485d-ac26-94ab39b921b2>
CC-MAIN-2016-26
http://grottadifumane.eu/en/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403825.35/warc/CC-MAIN-20160624155003-00192-ip-10-164-35-72.ec2.internal.warc.gz
en
0.907448
309
2.78125
3
How to Use Creation and Annihilation Operators to Solve Harmonic Oscillator Problems

Creation and annihilation may sound like big make-or-break-the-universe kinds of ideas, but they play a starring role in the quantum world when you’re working with harmonic oscillators. You use the creation and annihilation operators to solve harmonic oscillator problems because doing so is a clever way of handling the tougher Hamiltonian equation. Here’s what these two operators do:

Creation operator. The creation operator raises the energy level of an eigenstate by one level, so if the harmonic oscillator is in the fourth energy level, the creation operator raises it to the fifth level.

Annihilation operator. The annihilation operator does the reverse, lowering eigenstates one level.

These operators make it easier to solve for the energy spectrum without a lot of work solving for the actual eigenstates. In other words, you can understand the whole energy spectrum by looking at the energy difference between eigenstates.

Here’s how people usually solve for the energy spectrum. First, you introduce two new operators, p and q, which are dimensionless; they relate to the P (momentum) and X (position) operators this way:

p = P / (mωℏ)^(1/2)  and  q = (mω/ℏ)^(1/2) X

You use these two new operators, p and q, as the basis of the annihilation operator, a, and the creation operator, a†:

a = (q + ip) / √2  and  a† = (q − ip) / √2

Now you can write the harmonic oscillator Hamiltonian like this, in terms of a and a†:

H = ℏω (a†a + 1/2)

As for creating new operators here, the quantum physicists went crazy, even giving a name to a†a: it’s called the number operator, N = a†a.

So here’s how you can write the Hamiltonian:

H = ℏω (N + 1/2)

The N operator returns the number of the energy level of the harmonic oscillator. If you denote the eigenstates of N as |n⟩, you get this, where n is the number of the nth state:

N |n⟩ = n |n⟩

Then, by comparing the previous two equations, you have

H |n⟩ = ℏω (n + 1/2) |n⟩,  so  E_n = (n + 1/2) ℏω

Amazingly, that gives you the energy eigenvalues of the nth state of a quantum mechanical harmonic oscillator. So here are the energy states:

The ground state energy corresponds to n = 0:  E_0 = (1/2) ℏω

The first excited state is  E_1 = (3/2) ℏω

The second excited state has an energy of  E_2 = (5/2) ℏω

And so on. That is, the energy levels are discrete and nondegenerate (not shared by any two states). Thus, the energy spectrum is made up of equidistant bands.
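If you want to check these results numerically, the short Python sketch below does so with NumPy. It builds the annihilation operator as a matrix in a truncated number basis, using the standard matrix elements a|n⟩ = √n |n−1⟩ (a textbook result the article above does not derive), and confirms that the Hamiltonian's eigenvalues come out as 0.5, 1.5, 2.5, and so on, in units of ℏω.

# Numerical sanity check of the ladder-operator results above, using a
# truncated matrix representation of the annihilation operator in the
# number basis. Units are chosen so that hbar = omega = 1.

import numpy as np

N_LEVELS = 8  # size of the truncated basis |0>, |1>, ..., |7>

# Annihilation operator: a|n> = sqrt(n)|n-1>, so its matrix has sqrt(n)
# on the first superdiagonal.
a = np.diag(np.sqrt(np.arange(1, N_LEVELS)), k=1)
a_dag = a.conj().T                      # creation operator
N_op = a_dag @ a                        # number operator
H = N_op + 0.5 * np.eye(N_LEVELS)       # H = hbar*omega*(N + 1/2) with hbar = omega = 1

energies = np.linalg.eigvalsh(H)
print(np.round(energies, 6))            # 0.5, 1.5, 2.5, ... equally spaced

# The creation operator really does raise an eigenstate by one level:
ground = np.zeros(N_LEVELS); ground[0] = 1.0   # the state |0>
first_excited = a_dag @ ground                  # proportional to |1>
print(first_excited[:3])                        # [0. 1. 0.]

Because the number operator is diagonal in this basis, truncating the matrix does not disturb the retained eigenvalues, so the equal spacing of ℏω shows up exactly.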
<urn:uuid:0a5e3f53-e518-44c2-9b47-f171053077ac>
CC-MAIN-2016-26
http://www.dummies.com/how-to/content/how-to-use-creation-and-annihilation-operators-to-.navId-817347.html
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783392159.3/warc/CC-MAIN-20160624154952-00055-ip-10-164-35-72.ec2.internal.warc.gz
en
0.904587
476
3.375
3
Oneiry, in sixth grade and 11 years old, liked the tie-dye experiment, where learning about light and color also resulted in cool take-home T-shirts. Genesis, a nine-year-old fourth grader, really enjoyed the liquid nitrogen demonstration, especially the ice cream she got to make with it. And Julia, at 10 in fifth grade, had a good time making “gak,” a substance that’s not quite solid and not quite liquid – and slimy and fun.

They were among 10 Middletown girls between fourth and sixth grade who participated in a girls’ science camp sponsored by the Green Street Arts Center Aug. 4-8. The session, staffed by Wesleyan faculty, was designed to introduce girls to the “STEM” fields – Science, Technology, Engineering and Math. Women are underrepresented in these fields, and educators believe it’s important to engage girls in them as early as possible.

“There are significantly fewer women and people of color in STEM careers, and we wanted to create an opportunity to inspire young women to think about science as an option for themselves,” said Green Street Director Sara MacSorley. “I think getting girls excited about science is easy because science is inherently cool; the tougher part can be showing young people that science is a viable career for women.”

The week’s packed schedule included: a fruit fly experiment with Assistant Professor of Biology Ruth Johnson; a laser and prism demo by Assistant Professor of Physics Christina Othon; and a “Grow Your Own Germs” bacteria class guided by Erika Taylor, assistant professor of chemistry and of environmental studies. The teachers matched hands-on activities with explanations of basic scientific concepts, and the students practiced scientific methods as they kept lab notebooks, making observations and drawing conclusions from their experiments. Guest speakers included a female engineer who talked about her job.

On Aug. 7, the girls spent the day at the Exley Science Center, touring the university labs and later studying the eyes of flies to better understand how humans perceive light. They were able to look at specimens under high-powered microscopes – the same ones used by Wesleyan’s biology students.

“I really value being able to work with girls of this age group, because it is the time when kids start to envision their futures,” said Taylor. “And I think it is so important for these girls to see role models they can identify with, so that they know anyone can do science.”

Johnson, who wowed the campers with the fly-eye demonstration, said: “These 10 young ladies approached the challenges we gave them with a boldness and excitement that was inspiring. This is the type of boldness that young people have before any seeds of self-doubt have begun to germinate. You know, that can-I-really-do-this? doubt.”

And Othon was impressed with the way the campers embraced the scientific method. “They took ownership of their notebooks, and made marvelous observations and detailed descriptions of their work.”

The science camp was supported with a generous grant from the Petit Family Foundation.
<urn:uuid:20cf5061-bf91-43eb-97f8-ad04d8713498>
CC-MAIN-2016-26
http://newsletter.blogs.wesleyan.edu/2014/08/14/stem/?quad=LR&dow=Thu
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783399522.99/warc/CC-MAIN-20160624154959-00160-ip-10-164-35-72.ec2.internal.warc.gz
en
0.970821
674
3
3
Smart Body Area Networks
Body Area Network – BAN – technology is the use of small, low-power wireless devices which can be carried or embedded inside or on the body. Typical applications include:
- health and wellness monitoring
- sports training (e.g. to measure performance)
- personalized medicine (e.g. heart monitors)
- personal safety (e.g. fall detection)
A number of wireless BAN communication technologies have been implemented, based on existing radio technologies. But if BAN technology is to achieve its full potential, there is a need for a more specific, dedicated technology, optimized for BAN. For example, solutions for monitoring people during exercise one or two hours a day, a few days a week, may not be suitable for 24/7 monitoring as part of the Internet of Things (IoT).
Such a dedicated BAN technology would need features such as:
- ultra-low power radio, with a lower complexity Medium Access Control (MAC) protocol for extended autonomy
- enhanced robustness in the presence of interference
- interoperability when communicating over heterogeneous networks in the future IoT
Our SmartBAN committee (TC SmartBAN) is developing standards for a dedicated BAN radio technology. These standards specify:
- the low complexity Medium Access Control (MAC) and routeing requirements for SmartBANs
- an ultra-low power Physical Layer for on-body communications between a hub and sensor nodes
- interoperability over heterogeneous networks
- a system description, including an overview and use cases
The following is a list of the 20 latest published ETSI standards on smart body area networks. A full list of related standards in the public domain is accessible via the ETSI standards search. Via this interface you can also subscribe for alerts on updates of ETSI standards. For work in progress see the ETSI Work Programme on the Portal.
|Standard No.||Standard title|
|TS 103 378||Smart Body Area Networks (SmartBAN) Unified data representation formats, semantic and open data model|
|TS 103 325||Smart Body Area Network (SmartBAN); Low Complexity Medium Access Control (MAC) for SmartBAN|
|TS 103 326||Smart Body Area Network (SmartBAN); Enhanced Ultra-Low Power Physical Layer|
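To see why the "ultra-low power radio, with a lower complexity MAC protocol for extended autonomy" requirement matters, it helps to look at a rough energy budget for a duty-cycled sensor node. The Python sketch below is a minimal model of that trade-off; every figure in it (battery capacity, radio currents, duty cycles) is an illustrative assumption and not a value taken from the SmartBAN specifications.

# Rough battery-life estimate for a duty-cycled body-area sensor node.
# All figures below are illustrative assumptions, not values from the
# ETSI SmartBAN specifications.

def battery_life_days(capacity_mah, active_ma, sleep_ma, duty_cycle):
    """Average-current model: the radio is 'active' for duty_cycle of the time."""
    avg_ma = duty_cycle * active_ma + (1 - duty_cycle) * sleep_ma
    return capacity_mah / avg_ma / 24

coin_cell_mah = 230        # a typical CR2032-class coin cell
radio_active_ma = 5.0      # transmit/receive current (assumed)
radio_sleep_ma = 0.002     # deep-sleep current (assumed)

# A MAC that lets the node sleep 99.9% of the time versus one that keeps it
# awake 5% of the time makes the difference between years and weeks.
for duty in (0.001, 0.01, 0.05):
    days = battery_life_days(coin_cell_mah, radio_active_ma, radio_sleep_ma, duty)
    print(f"duty cycle {duty:.1%}: ~{days:.0f} days")

The point of a low-complexity MAC is precisely to keep the radio asleep for as much of the time as possible, which is what pushes autonomy from weeks toward years on a small battery.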
<urn:uuid:fdd298f6-60b4-4b4b-bcb2-2e205fb2e318>
CC-MAIN-2016-26
http://www.etsi.org/technologies-clusters/technologies/smart-body-area-networks
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783403826.29/warc/CC-MAIN-20160624155003-00082-ip-10-164-35-72.ec2.internal.warc.gz
en
0.852128
473
3.140625
3
Therapeutic Horse Riding Choices photos by Yvette Janvier Clarice Gualtieri of Chalfont, PA, during a riding lesson at Special Equestrians in Warrington, PA. Leading is volunteer Angela DiPasquale. They are trotting, which stimulates large muscle groups and helps Clarice focus. On any Saturday afternoon at centers around the Delaware Valley and the nation, children with disabilities are learning to ride a horse. Therapeutic horseback riding is an enjoyable, educational activity that develops skills and builds friendships. Therapeutic riding programs vary in their programs, riding styles and even the size of their horses. Riders can be found in an indoor ring or out on the trail. Some children take lessons alone, others in pairs or larger groups. In most area programs, an instructor closely accompanies beginners and at least one volunteer “sidewalker” walks or jogs alongside the horse and supports the rider. More experienced students can trot and canter on their own. Programs also range in price, from free of charge to $50 or more per lesson. At most riding centers, time in the saddle is just the beginning. Students participate in a variety of equine activities, such as grooming, feeding,attaching the lead rope and walking the horse. In addition to weekly instruction, many stables offer special events, as well as specialty and summer camp programs. Here are examples of the varied horse therapy activities offered by area riding centers. A Horse for Every Rider In addition to therapeutic horseback riding, Special Equestrians in Warrington, PA, offers hippotherapy, a treatment that uses activities on the horse to help children develop functional skills. With sessions led by an occupational or physical therapist, hippotherapy is well-suited for younger riders and children with significant needs. These and other equine activities run throughout the week at this state-of the-art facility. Among the innovative programs at Special Equestrians is REINS (Riders Excelling In New Skills), “a small, multi-sensory group program for children with Autism Spectrum Disorders (ASDs). The curriculum includes both mounted and unmounted components,” says program director Anne Reynolds. Participants ride, but they also practice handwriting and play communication games. Occupational therapists and riding instructors work together with individuals and in group sessions. At the Kaleidoscope Therapeutic Riding Program in Mount Laurel, NJ, founder Kelly Adams uses kid-friendly activities to help riders explore their potential. Theme days, such as Mardi Gras and Welcome to Spring, come complete with costumes. Horsey games, such as relay races with giant spoons and tic-tac-toe with bean bags, add to the fun. These themes and activities are the therapy. They create a warm environment and provide a fun, non-competitive way for children to interact with and learn from each other. According to Adams, “In this playful kind of setting, everyone excels.” While things can get silly, learning to ride is the goal. Usually, two students, preferably at different levels, take 30-minute lessons together. CHASE (Challenged Horsemen and Small Equines) Center founder Sherry Bohl offers therapeutic riding on miniature horses (34 to 38 inches tall) at DREAM Park, a recreation complex in Logan Township, NJ. “Lower to the ground and not as intimidating, smaller horses are perfect for younger children,” says Bohl. CHASE welcomes riders as young as 2 to 3 years of age. 
With the smallest students in mind, the CHASE Center sports a tiny saddle and three miniature mounts, Diva, Diamond, and Miss Special. Currently, lessons are offered one day a week.

Therapeutic horseback riding is not just about the riders. The generosity of volunteers, usually teenagers from the local community, sustains day-to-day operations at many centers. From cleaning stalls to playing with siblings to walking alongside the horse and rider during lessons, volunteers make a crucial difference. Although it is not for everyone, volunteer work at riding centers can be amazing and transformative. A phone call to your teen’s center of choice is enough to start the volunteer process.
Research on Riding
Programs have different ways to collect data on students and closely monitor their progress. One center, Quest Therapeutic Services in West Chester, PA, has a unique equine research project. Through a grant from Autism Speaks, the nation’s largest autism advocacy organization, professionals at Quest will launch, run, and evaluate an Equestrian Therapeutic Interactive Vaulting program for children with autism spectrum disorders. Both an art and a sport, vaulting is the performance of different movements on the back of a horse. The benefits associated with this novel physical and educational experience include increased self-confidence and improved social and motor skills for children.
Sharon A. Hollander, PsyD works with children with autism spectrum disorders at Children’s Specialized Hospital in Toms River, NJ.
Therapeutic Riding Centers
Carousel Farm Riding Stable / Carousel Park & Equestrian Center
The Center for Therapeutic &
CHASE: Challenged Horsemen
Compassionate Friends TRC
Labrador Hill Farm & Studio
Riding High Farm
All Riders Up
Flying High Equestrian Therapy, Inc.
Golden Pony Ranch, Inc.
Heaven's Gate Farm, LLC
Ivy Hill Therapeutic Equestrian Center
Pegasus Riding Academy
Quest Therapeutic Services, Inc.
Rainbow Ridge Farm
Sebastian Riding Associates
Special Equestrians, Inc.
Thorncroft Therapeutic Horsebackriding, Inc.
Worcester Stables at Our Farm
<urn:uuid:e1b88224-894b-4a2f-93ed-ae8cc0ab13bb>
CC-MAIN-2016-26
http://www.metrokids.com/MetroKids/May-2011/Therapeutic-Horse-Riding-Choices/
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783397795.31/warc/CC-MAIN-20160624154957-00140-ip-10-164-35-72.ec2.internal.warc.gz
en
0.920962
1,212
2.53125
3