{"website": "https://en.wikipedia.org/wiki/One_Rank,_One_Pension", "document": "One Rank One Pension (OROP), or \"same pension, for same rank, for same length of service, irrespective of the date of retirement\", is a longstanding demand of the Indian armed forces and veterans.\u200a The demand for pay-pension equity, which underlies the OROP concept, was provoked by the exparte decision by the Indira Gandhi-led Indian National Congress (INC) government, in 1973, two years after the historic victory in the 1971 Bangladesh war.\n\nIn 1986, the sense of unease and distrust prompted by the Third Central Pay Commission (CPC) was exacerbated by the Rajiv Gandhi led Indian National Congress (I) Government's decision to implement Rank Pay, which reduced basic pay of captain, majors, lt-colonel, colonels, and brigadiers, and their equivalent in the air-force, and the navy, relative to basic pay scales of civilian and police officers. The decision to reduce the basic pay of these ranks, implemented without consulting the armed forces, created radically asymmetries between police-military ranks, affected the pay, and pension of tens of thousands of officers and veterans, spawned two decades of contentious litigation by veterans. It became a lingering cause of distrust between the armed forces veterans and the MOD, which the government did little to ameliorate.\n\nIn 2008, the Manmohan Singh led United Progressive Alliance (UPA) Government in the wake of the Sixth Central Pay Commission (6CPC), discarded the concept of rank-pay. Instead it introduced Grade pay, and Pay bands, which instead of addressing the rank, pay, and pension asymmetries caused by 'rank pay' dispensation, reinforced existing asymmetries. The debasing of armed forces ranks was accompanied by decision in 2008 to create hundreds of new posts of secretaries, special Secretaries, director general of police (DGP) at the apex grade pay level to ensure that all civilian and police officers, including defence civilian officers, retire at the highest pay grade with the apex pay grade pensions with One Rank One Pay (OROP).\n\nBetween 2008\u201314, during the tenure of the UPA Government led by Prime Minister Manmohan Singh, myriad Armed Forces grievances prompted by perceived inequities subsumed with OROP issue to make OROP a potent rallying call that resonated with veterans of all ranks. Against the background of perceived discrimination, and slights, and dismissive response of the Government, armed forces veterans, in the later half 2008, started a campaign, of nationwide public protests, which included hunger strikes. In response to the OROP protests, which underscored the growing pay-pension-status asymmetries, the UPA Government, in 2011, appointed a parliamentary committee which found merit in the veterans demands for OROP.\n\nThe causes that inform the OROP protest movement are not pension alone, as armed forces veterans have often tried to make clear, and the parliamentary committee recorded. The issues, veterans emphasize, are of justice, equity, honor, and national security.[citation needed] The failure to address issue of pay-pension equity, and the underlying issue of honor, is not only an important cause for the OROP protest movement, but its escalation. 
The causes and grievances that inform OROP protesters and their high-ranking supporters, in addition to the failure of the government to implement OROP, are a string of contentious decisions taken by the UPA Government in 2008\u20139, in the wake of the Sixth Central Pay Commission (6 CPC), that sharply degraded armed forces pay grades and ranks. These decisions had a radical impact on the armed forces' sense of self-esteem and honor, and on their trust in the government and security bureaucracy; some came to dominate policy under the UPA government, and they remain unaddressed by the BJP Government. They are outlined in the succeeding paragraphs.", "doc_id": "8e778252-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Washington_University_in_St._Louis", "document": "Washington University in St. Louis (WashU or WUSTL) is a private research university with its main campus in St. Louis County and Clayton, Missouri. Founded in 1853 and named after George Washington, it is ranked among the most prestigious universities in the United States and in the world by major institutional publications.\n\nThe university's 169-acre Danforth Campus is at the center of Washington University and is the academic home to the majority of the university\u2019s undergraduate, graduate, and professional students. The Danforth Campus features predominantly Collegiate Gothic architecture in its academic buildings and is bordered by Forest Park and the cities of St. Louis, Clayton and University City. The university also has a West Campus in Clayton, a North Campus in the West End neighborhood of St. Louis, and a Medical Campus in the Central West End neighborhood of St. Louis. The Washington University Medical Campus spreads over 17 city blocks and 164 acres. It is home to the Washington University School of Medicine in St. Louis and its affiliated hospitals, clinics, patient care centers and research facilities.\n\nIt has students and faculty from all 50 U.S. states and more than 120 countries. Washington University is composed of seven graduate and undergraduate schools that encompass a broad range of academic fields. To prevent confusion over its location, the university's board of trustees added the phrase \"in St. Louis\" in 1976.\n\nWashington University has been a member of the Association of American Universities since 1923 and is classified among \"R1: Doctoral Universities \u2013 Very high research activity\". The National Science Foundation ranked the university 28th among academic institutions in the United States for research and development (R&D) expenditures. As of 2020, 25 Nobel laureates in economics, physiology or medicine, chemistry, and physics have been affiliated with Washington University, ten having done the major part of their pioneering research at the university.\n\nArchitecture offers BS and BA degrees at the undergraduate level, as well as the Master of Architecture, Master of Landscape Architecture, Master of Urban Design, MS in Advanced Architectural Design, and MS in Architectural Studies. There are also joint degree programs at the graduate level in conjunction with other divisions of the university.
Art offers BFA and BA degrees at the undergraduate level, as well as the MFA in Visual Art and MFA in Illustration & Visual Culture.\n\nIn October 2006 the Mildred Lane Kemper Art Museum moved into new facilities designed by Pritzker Prize-winning architect, and former faculty member, Fumihiko Maki. The art museum was first established in 1881 and was the first art museum west of the Mississippi River. It houses most of the University's art and sculpture collections, including pieces by Jackson Pollock, Robert Rauschenberg, Jenny Holzer, Pablo Picasso, Max Ernst, Willem de Kooning, Henri Matisse, Joan Mir\u00f3, and Rembrandt van Rijn, among others.\n\nCarmon Colangelo is the Ralph J. Nagel Dean of the Sam Fox School of Design & Visual Arts. Heather Woofter is director of the College of Architecture and the Graduate School of Architecture & Urban Design. Amy Hauft is the director of the College of Art and Graduate School of Art.\n\nThe McKelvey School of Engineering at Washington University in St. Louis (WashU Engineering) is a school with 88 tenured and tenure-track professors, 40 additional full-time faculty, 1,300 undergraduate students, 560 master's students, 380 PhD students, and more than 20,000 alumni. Aaron Bobick serves as dean of the school.\n\nWith approximately $27 million in annual research awards, the school focuses intellectual efforts on medicine and health, energy and environment, entrepreneurship, and security. The school is ranked among the top 50 by the magazine U.S. News & World Report, and the biomedical engineering graduate program was ranked 12th by U.S. News & World Report in 2012\u20132013.\n\nOn January 31, 2019, the School of Engineering & Applied Science was renamed the James McKelvey School of Engineering, in honor of trustee and distinguished alumnus Jim McKelvey Jr., the co-founder of Square, after his donation of an undisclosed sum that the school's dean, Aaron Bobick, said has been the largest in the school's 162-year history.", "doc_id": "8e7784fa-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Missoula,_Montana", "document": "Missoula is a city in the U.S. state of Montana; it is the county seat of Missoula County. It is located along the Clark Fork River near its confluence with the Bitterroot and Blackfoot Rivers in western Montana and at the convergence of five mountain ranges, thus it is often described as the \"hub of five valleys\". The 2020 United States Census shows the city's population at 73,489 and the population of the Missoula Metropolitan Area at 117,922. After Billings, Missoula is the second-largest city and metropolitan area in Montana. Missoula is home to the University of Montana, a public research university.\n\nThe Missoula area began seeing settlement by people of European descent in 1858 including William T. Hamilton, who set up a trading post along the Rattlesnake Creek, Captain Richard Grant, who settled near Grant Creek, and David Pattee, who settled near Pattee Canyon. Missoula was founded in 1860 as Hellgate Trading Post while still part of Washington Territory. By 1866, the settlement had moved east, 5 miles (8 km) upstream, and had been renamed Missoula Mills, later shortened to Missoula. The mills provided supplies to western settlers traveling along the Mullan Road. The establishment of Fort Missoula in 1877 to protect settlers further stabilized the economy.
The arrival of the Northern Pacific Railway in 1883 brought rapid growth and the maturation of the local lumber industry. In 1893, the Montana Legislature chose Missoula as the site for the state's first university. Along with the U.S. Forest Service headquarters founded in 1908, lumber and the university remained the basis of the local economy for the next 100 years.\n\nBy the 1990s, Missoula's lumber industry had gradually disappeared, and as of 2009, the city's largest employers were the University of Montana, Missoula County Public Schools, and Missoula's two hospitals. The city is governed by a mayor\u2013council government with 12 city council members, two from each of the six wards. In and around Missoula are 400 acres (160 ha) of parkland, 22 miles (35 km) of trails, and nearly 5,000 acres (2,000 ha) of open-space conservation land, with adjacent Mount Jumbo being home to grazing elk and mule deer during the winter.[13] The city is also home to both Montana's largest brewery and its oldest active one, as well as the Montana Grizzlies. Notable residents include the first woman to serve in the U.S. Congress, Jeannette Rankin.\n\nMissoula is located at the western edge of Montana, approximately 45 miles (70 km) from the Idaho border. The city is at an elevation of 3,209 feet (978 m) above sea level, with nearby Mount Sentinel and Mount Jumbo steeply rising to 5,158 feet (1,572 m) and 4,768 feet (1,453 m), respectively. According to the Census Bureau's 2015 figures, the city had a total area of 29.08 square miles, of which 28.90 square miles were land and 0.184 square miles were covered by water.\n\nAround 13,000 years ago, the entire valley was at the bottom of Glacial Lake Missoula. As could be expected for a former lake bottom, the layout of Missoula is relatively flat and surrounded by steep hills. Evidence of the city of Missoula's lake-bottom past can be seen in the form of ancient horizontal wave-cut shorelines on nearby Mount Sentinel and Mount Jumbo. At the location of present-day University of Montana, the lake once had a depth of 950 feet (290 m). The Clark Fork River enters the Missoula Valley from the east through Hellgate Canyon after joining the nearby Blackfoot River at the site of the former Milltown Dam. The Bitterroot River and multiple smaller tributaries join the Clark Fork on the western edge of Missoula. The city also sits at the convergence of five mountain ranges: the Bitterroot Mountains, Sapphire Range, Garnet Range, Rattlesnake Mountains, and the Reservation Divide, and thus is often described as the \"hub of five valleys\".\n\nLocated in the Northern Rockies, Missoula has a typical Rocky Mountain ecology. Local wildlife includes populations of white-tailed deer, moose, grizzly bears, black bears, osprey, and bald eagles. During the winter, rapid snowmelt on Mount Jumbo due to its steep slope leaves grass available for grazing elk and mule deer. The rivers around Missoula provide nesting habitats for bank swallows, northern rough-winged swallows, and belted kingfishers. Killdeer and spotted sandpipers can be seen foraging for insects along the gravel bars. Other species include song sparrows, catbirds, several species of warblers, and the pileated woodpecker. The rivers also provide cold, clean water for native fish such as westslope cutthroat trout and bull trout. The meandering streams also attract beaver and wood ducks.
The parks also host a variety of snakes such as racers, garter snakes, and rubber boas.\n\nNative riparian plant life includes sandbar willows and cottonwoods, with Montana's state tree, the ponderosa pine, also prevalent. Other native plants include wetland species such as cattails and beaked sedge, as well as shrubs and berry plants such as Douglas hawthorn, chokecherry, and western snowberries. To the chagrin of local farmers, Missoula is also home to several noxious weeds, which multiple programs have set out to eliminate. Notable ones include Dalmatian toadflax, spotted knapweed, leafy spurge, St. John's wort, and sulfur cinquefoil. Controversially, the Norway maples that line many of Missoula's older streets have also been declared an invasive species.", "doc_id": "8e778662-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Hafez_al-Assad", "document": "Hafez al-Assad was a Syrian politician and military officer who served as President of Syria from 1971 to 2000. He was also Prime Minister of Syria from 1970 to 1971, as well as regional secretary of the regional command of the Syrian regional branch of the Arab Socialist Ba'ath Party and secretary general of the National Command of the Ba'ath Party from 1970 to 2000. Assad participated in the 1963 Syrian coup d'\u00e9tat which brought the Syrian regional branch of the Arab Socialist Ba'ath Party to power, and the new leadership appointed him commander of the Syrian Air Force. In February 1966, Assad participated in a second coup, which toppled the traditional leaders of the Ba'ath Party. Assad was appointed defence minister by the new government. Four years later, Assad initiated a third coup which ousted the de facto leader Salah Jadid and appointed himself as leader of Syria.\n\nAssad imposed change on the Ba'ath government when he took power, introducing elements of capitalism, further pushing the agenda of private property, and strengthening the country's foreign relations with countries which his predecessor had deemed reactionary. He sided with the Soviet Union and the communist bloc during the Cold War in return for support against Israel, and, while he had forsaken the pan-Arab concept of unifying the Arab world into one Arab nation, he sought to make Syria the defender of Palestine and Arab interests against Israel. When he came to power, Assad organised state services along sectarian lines (the Sunnis became the heads of political institutions, while the Alawites took control of the military, intelligence, and security apparatuses). The formerly collegial powers of Ba'athist decision-making were curtailed, and were transferred to the Syrian presidency. The Syrian government ceased to be a one-party system in the normal sense of the word, and was turned into a one-party dictatorship with a strong presidency. To maintain this system, a cult of personality centred on Assad and his family was created by the president and the Ba'ath party.\n\nHaving become the main source of initiative inside the Syrian government, Assad began looking for a successor. His first choice was his brother Rifaat, but Rifaat attempted to seize power in 1983\u201384 when Hafez's health was in doubt. Rifaat was subsequently exiled when Hafez's health recovered. Hafez's next choice of successor was his eldest son, Bassel. However, Bassel died in a car accident in 1994, and Hafez turned to his third choice\u2014his younger son Bashar, who at that time had no political experience.
The move to appoint a member of his own family as his successor was met with criticism within some quarters of the Syrian ruling class, but Assad persisted with his plan and demoted officials who opposed this succession. Hafez died in 2000 and Bashar succeeded him as president.\n\nIn the aftermath of the 1963 coup, at the First Regional Congress (held 5 September 1963), Assad was elected to the Syrian Regional Command (the highest decision-making body in the Syrian Regional Branch).[45] While not a leadership role, it was Assad's first appearance in national politics;[45] in retrospect, he said he positioned himself \"on the left\" in the Regional Command.[45] Khalid al-Falhum, a Palestinian who would later work for the Palestine Liberation Organization (PLO), met Assad in 1963; he noted that Assad was a strong leftist \"but was clearly not a communist\", committed instead to Arab nationalism.[46]\n\nDuring the 1964 Hama riot, Assad voted to suppress the uprising violently if needed. The decision to suppress the Hama riot led to a schism in the Military Committee between Umran and Jadid. Umran opposed force, instead wanting the Ba'ath Party to create a coalition with other pan-Arab forces. Jadid desired a strong one-party state, similar to those in the communist countries of Europe. Assad, as a junior partner, kept quiet at first but eventually allied himself with Jadid. Why Assad chose to side with him has been widely discussed; he probably shared Jadid's radical ideological outlook. Having lost his footing on the Military Committee, Umran aligned himself with Aflaq and the National Command; he told them that the Military Committee was planning to seize power in the party by ousting them. Because of Umran's defection, Rifaat al-Assad (Assad's brother) succeeded Umran as commander of a secret military force tasked with protecting Military Committee loyalists.\n\nIn its bid to seize power, the Military Committee allied itself with the regionalists, a group of cells in the Syrian Regional Branch that refused to disband in 1958 when ordered to do so. Although Aflaq considered these cells traitors, Assad called them the \"true cells of the party\"; this again highlighted differences between the Military Committee and the National Command headed by Aflaq. At the Eighth National Congress in 1965, Assad was elected to the National Command, the party's highest decision-making body. From his position as part of the National Command, Assad informed Jadid of its activities. After the congress, the National Command dissolved the Syrian Regional Command; Aflaq proposed Salah al-Din al-Bitar as prime minister, but Assad and Brahim Makhous opposed Bitar's nomination. According to Seale, Assad abhorred Aflaq; he considered him an autocrat and a rightist, accusing him of \"ditching\" the party by ordering the dissolution of the Syrian Regional Branch in 1958. Assad, who also disliked Aflaq's supporters, nevertheless opposed a show of force against the Aflaqites. In response to the imminent coup, Assad, Naji Jamil, Husayn Mulhim and Yusuf Sayigh left for London.\n\nIn the 1966 Syrian coup d'\u00e9tat, the Military Committee overthrew the National Command.
The coup led to a permanent schism in the Ba'ath movement, the advent of neo-Ba'athism and the establishment of two centers of the international Ba'athist movement: one Iraqi- and the other Syrian-dominated.\n\nThe Arab defeat in the Six-Day War, in which Israel captured the Golan Heights from Syria, provoked a furious quarrel among Syria's leadership.[62] The civilian leadership blamed military incompetence, and the military responded by criticizing the civilian leadership (led by Jadid).[62] Several high-ranking party members demanded Assad's resignation, and an attempt was made to vote him out of the Regional Command, the party's highest decision-making body.[62] The motion was defeated by one vote, with Abd al-Karim al-Jundi (who the anti-Assad members hoped would succeed Assad as defense minister) voting, as Patrick Seale put it, \"in a comradely gesture\" to retain him.[62] At the end of the war, the party leadership freed Aflaqites Umran, Amin al-Hafiz and Mansur al-Atrash from prison.[62] Shortly after his release, Hafez was approached by dissident Syrian military officers to oust the government; he refused, believing that a coup at that time would have helped Israel, but not Syria.[62]\n\nThe war was a turning point for Assad (and Ba'athist Syria in general), and his attempted ouster began a power struggle with Jadid for control of the country. Until then Assad had not shown ambition for high office, arousing little suspicion in others. From the 1963 Syrian coup d'\u00e9tat to the Six-Day War in 1967, Assad did not play a leading role in politics and was usually overshadowed by his contemporaries. As Patrick Seale wrote, he was \"apparently content to be a solid member of the team without the aspiration to become number one\". Although Jadid was slow to see Assad's threat, shortly after the war Assad began developing a network in the military and promoted friends and close relatives to high positions.", "doc_id": "8e7787f2-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Carl_Friedrich_Goerdeler", "document": "Carl Friedrich Goerdeler was a monarchist conservative German politician, executive, economist, civil servant and opponent of the Nazi regime. He opposed some anti-Jewish policies while he held office and was opposed to the Holocaust.\n\nHad the 20 July plot to overthrow Hitler's dictatorship in 1944 succeeded, Goerdeler would have served as the Chancellor of the new government. After his arrest, he gave the names of numerous co-conspirators to the Gestapo, causing the arrests and executions of hundreds or even thousands of others. Goerdeler was executed by hanging on 2 February 1945.\n\nGoerdeler was born into a family of Prussian civil servants in Schneidem\u00fchl in the Prussian Province of Posen of the German Empire (now Pi\u0142a in present-day Poland). Goerdeler's parents supported the Free Conservative Party, and after 1899 Goerdeler's father served in the Prussian Landtag as a member of that party. Goerdeler's biographer and friend Gerhard Ritter described his upbringing as one of a large, loving middle-class family that was cultured, devoutly Lutheran, nationalist and conservative. As a young man, the deeply religious Goerdeler chose as his motto to live by omnia restaurare in Christo (to restore everything in Christ). From 1902 to 1905 Goerdeler studied economics and law at the University of T\u00fcbingen. From 1911 he worked as a civil servant for the municipal government of Solingen in the Prussian Rhine Province.
The same year, Goerdeler married Anneliese Ulrich, by whom he would have five children.\n\nDuring the First World War, Goerdeler served as a junior officer on the Eastern Front, rising to the rank of captain. From February 1918 he worked as part of the German military government in Minsk. After the war ended, Goerdeler served on the headquarters of the XVII Army Corps based in Danzig (now Gda\u0144sk in Poland). In June 1919, Goerdeler submitted a memorandum to his superior, General Otto von Below, calling for the destruction of Poland as the only way to prevent territorial losses on Germany's eastern borders.\n\nAfter his discharge from the German Army, Goerdeler joined the ultraconservative German National People's Party (DNVP). Like most other Germans, Goerdeler strongly opposed the Versailles Treaty of 1919, which forced Germany to cede territories to the restored Polish state. In 1919, before the exact line of the Polish\u2013German border was determined, he suggested restoring West Prussia to Germany. Despite his strong hostile feelings towards Poland, Goerdeler played a key role during the 1920 Polish\u2013Soviet War in breaking a strike by Danzig dockers, who wished to shut down Poland's economy by closing its principal port. He thought that Poland was a less undesirable neighbour than Bolshevik Russia.\n\nIn 1922, Goerdeler was elected as mayor (B\u00fcrgermeister) of K\u00f6nigsberg (now Kaliningrad, Russia) in East Prussia and later, on 22 May 1930, as mayor of Leipzig. During the Weimar Republic era (1918\u20131933), Goerdeler was widely regarded as a hard-working and outstanding municipal politician.\n\nOn 8 December 1931, Chancellor Heinrich Br\u00fcning, a personal friend, appointed Goerdeler as Reich Price Commissioner and entrusted him with the task of overseeing his deflationary policies. The sternness with which Goerdeler administered his task as Price Commissioner made him a well-known figure in Germany. Later he resigned from the DNVP because its leader, Alfred Hugenberg, was a committed foe of the Br\u00fcning government.\n\nIn the early 1930s, Goerdeler became a leading advocate of the viewpoint that the Weimar Republic had failed, as shown by the Great Depression, and that a conservative revolution was needed to replace democracy.\n\nAfter the downfall of the Br\u00fcning government in 1932, Goerdeler was considered a potential Chancellor. General Kurt von Schleicher sounded him out for the post but eventually Franz von Papen was chosen instead.\n\nAfter the fall of Br\u00fcning's government on 30 May 1932, Br\u00fcning himself recommended Goerdeler to President Paul von Hindenburg as his successor. Hindenburg rejected Goerdeler because of his former membership of the DNVP. From 1928, under the leadership of Alfred Hugenberg, the DNVP had waged a vituperative campaign against Hindenburg and had even labeled him as one of the \"November Criminals\" who had allegedly \"stabbed Germany in the back\" in 1918. As a result, by 1932, no current or even former member of the DNVP was acceptable to Hindenburg as chancellor.\n\nThe fall of Br\u00fcning led to Goerdeler's resignation as Price Commissioner. Later in 1932, Goerdeler refused an offer to serve in Papen's cabinet.", "doc_id": "8e77893c-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Boeing_CH-47_Chinook", "document": "The Boeing CH-47 Chinook is a tandem rotor helicopter developed by American rotorcraft company Vertol and manufactured by Boeing Vertol.
The Chinook is a heavy-lift helicopter, among the heaviest-lifting Western helicopters. Its name, Chinook, is from the Native American Chinook people of Oregon and Washington state.\n\nThe Chinook was originally designed by Vertol, which had begun work in 1957 on a new tandem-rotor helicopter, designated as the Vertol Model 107 or V-107. Around the same time, the United States Department of the Army announced its intention to replace the piston engine\u2013powered Sikorsky CH-37 Mojave with a new, gas turbine\u2013powered helicopter. During June 1958, the U.S. Army ordered a small number of V-107s from Vertol under the YHC-1A designation; following testing, it came to be considered by some Army officials to be too heavy for assault missions and too light for transport purposes. While the YHC-1A would be improved and adopted by the U.S. Marine Corps as the CH-46 Sea Knight, the Army sought a heavier transport helicopter, and ordered an enlarged derivative of the V-107 with the Vertol designation Model 114. Initially designated as the YCH-1B, on 21 September 1961, the preproduction rotorcraft performed its maiden flight. In 1962, the HC-1B was redesignated CH-47A under the 1962 United States Tri-Service aircraft designation system.\n\nThe Chinook possesses several means of loading various cargoes, including multiple doors across the fuselage, a wide loading ramp located at the rear of the fuselage and a total of three external ventral cargo hooks to carry underslung loads. Capable of a top speed of 170 knots (200 mph; 310 km/h), upon its introduction to service in 1962, the helicopter was considerably faster than contemporary 1960s utility helicopters and attack helicopters, and is still one of the fastest helicopters in the US inventory. Improved and more powerful versions of the Chinook have also been developed since its introduction; one of the most substantial variants to be produced was the CH-47D, which first entered service in 1982; improvements from the CH-47C standard included upgraded engines, composite rotor blades, a redesigned cockpit to reduce workload, improved and redundant electrical systems and avionics, and the adoption of an advanced flight control system. It is one of the few aircraft developed during the early 1960s \u2013 along with the fixed-wing Lockheed C-130 Hercules cargo aircraft \u2013 that have remained in both production and frontline service for over 60 years.\n\nThe military version of the helicopter has been exported to nations across the world; the U.S. Army and the Royal Air Force (see Boeing Chinook (UK variants)) have been its two largest users. The civilian version of the Chinook is the Boeing Vertol 234. It has been used by civil operators not only for passenger and cargo transport, but also for aerial firefighting and to support logging, construction, and oil extraction industries.\n\nDuring late 1956, the United States Department of the Army announced its intention to replace the Sikorsky CH-37 Mojave, which was powered by piston engines, with a new, gas turbine-powered helicopter. Turbine engines were also a key design feature of the smaller UH-1 \"Huey\" utility helicopter. Following a design competition, in September 1958, a joint Army\u2013Air Force source selection board recommended that the Army procure the Vertol-built medium transport helicopter. However, funding for full-scale development was not then available, and the Army vacillated on its design requirements.
Some officials in Army Aviation thought that the new helicopter should be operated as a light tactical transport aimed at taking over the missions of the old piston-engined Piasecki H-21 and Sikorsky H-34 helicopters, and consequently be capable of carrying about 15 troops (one squad). Another faction in Army Aviation thought that the new helicopter should be much larger, enabling it to airlift large artillery pieces and possess enough internal space to carry the new MGM-31 \"Pershing\" missile system.\n\nDuring 1957, Vertol commenced work on a new tandem-rotor helicopter, designated as the Vertol Model 107 or V-107. During June 1958, the U.S. Army awarded a contract to Vertol for the acquisition of a small number of the rotorcraft, giving it the YHC-1A designation. As ordered, the YHC-1A possessed the capacity to carry a maximum of 20 troops. Three underwent testing by the Army to derive engineering and operational data. However, the YHC-1A was considered by many figures within the Army to be too heavy for the assault role, while too light for the more general transport role. Accordingly, a decision was made to procure a heavier transport helicopter, and at the same time, upgrade the UH-1 \"Huey\" to serve as the needed tactical troop transport. The YHC-1A would be improved and adopted by the Marines as the CH-46 Sea Knight in 1962. As a result, the Army issued a new order to Vertol for an enlarged derivative of the V-107, known by internal company designation as the Model 114, which it gave the designation of HC-1B. On 21 September 1961, the preproduction Boeing Vertol YCH-1B made its initial hovering flight. During 1962, the HC-1B was redesignated the CH-47A under the 1962 United States Tri-Service aircraft designation system; it was also named \"Chinook\" after the Chinook people of the Pacific Northwest.\n\nThe CH-47 is powered by two Lycoming T55 turboshaft engines, mounted on each side of the helicopter's rear pylon and connected to the rotors by drive shafts. Initial models were fitted with engines rated at 2,200 hp (1,600 kW) each. The counter-rotating rotors eliminate the need for an antitorque vertical rotor, allowing all power to be used for lift and thrust. The ability to adjust lift in either rotor makes it less sensitive to changes in the center of gravity, important for cargo lifting and dropping. While hovering over a specific location, a twin-rotor helicopter has increased stability over a single-rotor helicopter when weight is added or removed, for example, when troops drop from or begin climbing up ropes to the aircraft, or when other cargo is dropped. If one engine fails, the other can drive both rotors. The \"sizing\" of the Chinook was directly related to the growth of the Huey and the Army's tacticians' insistence that initial air assaults be built around the squad. The Army pushed for both the Huey and the Chinook, and this focus was responsible for the acceleration of its air mobility effort.\n\nImproved and more powerful versions of the CH-47 have been developed since the helicopter entered service. The U.S. Army's first major design leap was the now-common CH-47D, which entered service in 1982. Improvements from the CH-47C included upgraded engines, composite rotor blades, a redesigned cockpit to reduce pilot workload, improved and redundant electrical systems, an advanced flight control system, and improved avionics.
The latest mainstream generation is the CH-47F, which features several major upgrades, including reduced maintenance requirements and digitized flight controls, and is powered by two 4,733-horsepower (3,529 kW) Honeywell engines.\n\nA commercial model of the Chinook, the Boeing-Vertol Model 234, is used worldwide for logging, construction, fighting forest fires, and supporting petroleum extraction operations. In December 2006, Columbia Helicopters Inc purchased the type certificate of the Model 234 from Boeing. The Chinook has also been licensed to be built by companies outside the United States, such as Agusta (now AgustaWestland) in Italy and Kawasaki in Japan.", "doc_id": "8e778aae-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Unicode_character_property", "document": "The Unicode Standard assigns various properties to each Unicode character and code point.\n\nThe properties can be used to handle characters (code points) in processes such as line breaking, right-to-left script direction, or applying controls. Some \"character properties\" are also defined for code points that have no character assigned and for code points that are labeled like \"<not a character>\". The character properties are described in Standard Annex #44.\n\nProperties have levels of forcefulness: normative, informative, contributory, or provisional. For simplicity of specification, a character property can be assigned by specifying a continuous range of code points that have the same property.\n\nA Unicode character is assigned a unique Name (na). The name is composed of uppercase letters A\u2013Z, digits 0\u20139, hyphen-minus (-) and space ( ). Some sequences are excluded: names beginning with a space or hyphen, names ending with a space or hyphen, repeated spaces or hyphens, and space after hyphen are not allowed. The name is guaranteed to be unique within Unicode, and can be used to identify a code point and its character. Ideographic characters, of which there are tens of thousands, are named in the pattern \"cjk unified ideograph-hhhh\". For example, U+4E00 \u4e00 CJK UNIFIED IDEOGRAPH-4E00. Formatting characters are named too: U+00A0 NO-BREAK SPACE.\n\nThe following classes of code point do not have a Name (na=\"\"): Controls (General Category: Cc), Private use (Co), Surrogate (Cs), Non-characters (Cn) and Reserved (Cn). They may be referenced, informally, by a generic or specific meta-name, called \"Code Point Labels\": <control>, <control-hhhh>, <private-use-hhhh>, <surrogate-hhhh>, <noncharacter-hhhh>, or <reserved-hhhh>. Since these labels contain <>-brackets, they can never appear as a Name, which prevents confusion.\n\nIn version 2.0 of Unicode, many names were changed. From then on the rule \"a name will never change\" came into effect, including the strict (normative) use of alias names. Disused version 1.0 names were moved to the property Alias, to provide some backward compatibility.\n\nSix character properties pertain to bi-directional writing: Bidi_Class, Bidi_Control, Bidi_Mirrored, Bidi_Mirroring_Glyph, Bidi_Paired_Bracket and Bidi_Paired_Bracket_Type.\n\nOne of Unicode's major features is support for bi-directional (Bidi) text display, right-to-left (R-to-L) and left-to-right (L-to-R). The Unicode Bidirectional Algorithm (UAX #9) describes the process of presenting text with alternating script directions. For example, it enables a Hebrew quote in an English text. The Bidi_Class (the bidirectional character type) marks a character's behaviour in directional writing. To override a direction, Unicode has defined special formatting control characters (Bidi-Controls).
These characters can enforce a direction, and by definition only affect bi-directional writing.\n\nIn normal situations, the algorithm can determine the direction of a text by this character property. To control more complex Bidi situations, e.g. when an English text has a Hebrew quote, extra options have been added to Unicode. Twelve characters have the property Bidi_Control=Yes: ALM, FSI, LRE, LRI, LRM, LRO, PDF, PDI, RLE, RLI, RLM and RLO. These are invisible formatting control characters, only used by the algorithm and with no effect outside of bidirectional formatting.[18] Despite the name, they are formatting characters, not control characters, and have General category \"Other, format (Cf)\" in the Unicode definition.\n\nIn essence, the algorithm determines sequences of characters with the same strong direction type (R-to-L or L-to-R), taking into account any overruling by the special Bidi-controls. Number strings (Weak types) are assigned a direction according to their strong environment, as are Neutral characters. Finally, the characters are displayed according to each string's direction.\n\nTwo character properties are relevant to determining a mirror image of a glyph in bidirectional text: Bidi_Mirrored=Yes indicates that the glyph should be mirrored when written R-to-L. The property Bidi_Mirroring_Glyph=U+hhhh can then point to the mirrored character. For example, brackets \"()\" are mirrored this way. Shaping cursive scripts such as Arabic, and mirroring glyphs that have a direction, is not part of the algorithm.\n\nCharacters are classified with a Numeric type. Characters such as fractions, subscripts, superscripts, Roman numerals, currency numerators, encircled numbers, and script-specific digits are type Numeric. They have a numeric value that can be decimal, including zero and negatives, or a vulgar fraction. If there is no such value, as with most characters, the numeric type is \"None\".\n\nThe characters that do have a numeric value are separated into three groups: Decimal (De), Digit (Di) and Numeric (Nu, i.e. all other). \"Decimal\" means the character is a straight decimal digit. Only characters that are part of a contiguous encoded range 0..9 have numeric type Decimal. Other digits, like superscripts, have numeric type Digit. All numeric characters like fractions and Roman numerals end up with the type \"Numeric\". The intended effect is that a simple parser can use these decimal numeric values, without being distracted by, say, a numeric superscript or a fraction. Seventy-three CJK Ideographs that represent a number, including those used for accounting, are typed Numeric.\n\nOn the other hand, characters that could have a numeric value as a second meaning are still marked Numeric type \"None\", and have no numeric value (\"\"). For example, Latin letters can be used in paragraph numbering like \"II.A.1.b\", but the letters \"I\", \"A\" and \"b\" are not numeric (type \"None\") and have no numeric value.", "doc_id": "8e778c3e-42e9-11ed-a0a2-3e22fbbc18d6"}
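The Decimal/Digit/Numeric split and the Bidi properties described in the Unicode record above can be inspected programmatically. The following is a minimal illustrative sketch using Python's standard unicodedata module; the describe helper and the sample characters are illustrative choices, not something taken from the article.

import unicodedata

def describe(ch: str) -> None:
    # Name (na); a default is supplied because label-only code points have no Name.
    name = unicodedata.name(ch, "<no name>")
    category = unicodedata.category(ch)       # General Category, e.g. "Lu", "Nd", "Cf"
    bidi = unicodedata.bidirectional(ch)      # Bidi_Class, e.g. "L", "R", "EN", "ON"
    mirrored = unicodedata.mirrored(ch)       # 1 if Bidi_Mirrored=Yes, else 0
    # The numeric type falls out of which lookup succeeds:
    # Decimal -> decimal(), Digit -> digit(), Numeric -> numeric(), else "None".
    if unicodedata.decimal(ch, None) is not None:
        numeric = f"Decimal={unicodedata.decimal(ch)}"
    elif unicodedata.digit(ch, None) is not None:
        numeric = f"Digit={unicodedata.digit(ch)}"
    elif unicodedata.numeric(ch, None) is not None:
        numeric = f"Numeric={unicodedata.numeric(ch)}"
    else:
        numeric = "None"
    print(f"U+{ord(ch):04X} {name}: category={category} bidi={bidi} "
          f"mirrored={mirrored} numeric_type={numeric}")

# Letter, decimal digit, superscript digit, vulgar fraction, mirrored bracket, R-to-L letter.
for ch in "A9\u00b2\u00bd(\u05d0":
    describe(ch)

Running this shows, for example, that U+0039 reports Decimal=9 while U+00B2 (superscript two) reports Digit=2, matching the article's point that a simple parser can rely on Decimal digits without being distracted by superscripts or fractions.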
{"website": "https://en.wikipedia.org/wiki/History_of_Plaid_Cymru", "document": "Plaid Cymru (The Party of Wales) originated in 1925 after a meeting held at that year's National Eisteddfod in Pwllheli, Caernarfonshire (now Gwynedd). Representatives from two Welsh nationalist groups founded the previous year, Byddin Ymreolwyr Cymru (\"Army of Welsh Home Rulers\") and Y Mudiad Cymreig (\"The Welsh Movement\"), agreed to meet and discuss the need for a \"Welsh party\". The party was founded as Plaid Genedlaethol Cymru, the National Party of Wales, and attracted members from the left, right and centre of the political spectrum, including both monarchists and republicans. Its principal aims include the promotion of the Welsh language and the political independence of the Welsh nation.\n\nAlthough Saunders Lewis is regarded as the founder of Plaid Cymru, the historian John Davies argues that the ideas of the left-wing activist D. J. Davies, which were adopted by the party's president Gwynfor Evans after the Second World War, were more influential in shaping its ideology in the long term. According to Davies, D. J. Davies was an \"equally significant figure\" in the history of Welsh nationalism, but it was Lewis's \"brilliance and charismatic appeal\" which was firmly associated with Plaid in the 1930s.\n\nAfter the party's initial success as an educational pressure group, the events surrounding T\u00e2n yn Ll\u0177n (Fire in Ll\u0177n) in the 1930s led it to adopt a pacifist political doctrine. Protests against the flooding of Capel Celyn in the 1950s further helped define its politics. These early events were followed by Evans's election to Parliament as the party's first Member of Parliament (MP) in 1966, the successful campaigning for the Welsh Language Act of 1967, and Evans going on hunger strike for a dedicated Welsh-language television channel in 1981.\n\nPlaid Cymru is the third-largest political party in Wales, with 11 of 60 seats in the Senedd. From 2007 to 2011, it was the junior partner in the One Wales coalition government, with Welsh Labour. Plaid held one of the four Welsh seats in the European Parliament, holds four of the 40 Welsh seats in the UK Parliament, and it has 203 of 1,253 principal local authority councillors. According to accounts filed with the Electoral Commission for the year 2018, the party had an income of around \u00a3690,000 and an expenditure of about \u00a3730,000.\n\nThere had been discussions about the need for a \"Welsh party\" since the 19th century. In the generation or so before 1922 there \"had been a marked growth in the constitutional recognition of the Welsh nation\", wrote historian Dr John Davies. A Welsh national consciousness re-emerged during the 19th century, leading to the establishment of the National Eisteddfod in 1861, the University of Wales (Prifysgol Cymru) in 1893, and the National Library of Wales (Llyfrgell Genedlaethol Cymru) in 1911, and by 1915 the Welsh Guards (Gwarchodlu Cymreig) was formed to include Wales in the UK national components of the Foot Guards. By 1924 there were people in Wales \"eager to make their nationality the focus of Welsh politics\".\n\nSupport for home rule for Wales and Scotland amongst most political parties was strongest in 1918, following the independence of other European countries after the First World War and the Easter Rising in Ireland, wrote Dr Davies. However, in the UK General Elections of 1922, 1923, and 1924, \"Wales as a political issue was increasingly eliminated from the [national agenda]\". By August 1925 unemployment in Wales had risen to 28.5%, in contrast to the economic boom in the early 1920s. For Wales, the long depression began in 1925.\n\nIt was in this climate that the Welsh Home Rulers group and the Welsh Movement met. Both organisations sent a delegation of three to the meeting, with H. R. Jones heading the Welsh Home Rulers group and Saunders Lewis heading The Welsh Movement.
They were joined by Lewis Valentine, D.J. Williams, and Ambrose Bebb, among others. The principal aim of the party was to foster a Welsh-speaking Wales. To this end it was agreed that party business be conducted in Welsh, and that members sever all links with other British parties. Lewis insisted on these principles before he would agree to the Pwllheli conference.\n\nAccording to the 1911 census, out of a total population of Wales of just under 2.5 million, 43.5% spoke Welsh as a primary language. This was a decrease from the 1891 census, in which 54.4% spoke Welsh out of a population of 1.5 million.\n\nIn these circumstances, wrote Dr Davies, Lewis condemned \"'Welsh nationalism' as it had hitherto existed, a nationalism characterised by inter-party conferences, an obsession with Westminster and a willingness to accept a subservient position for the Welsh language\". It may be because of these strict positions that the party failed to attract politicians of experience in its early years. However, the party's members believed its founding was an achievement in itself; \"merely by existing, the party was a declaration of the distinctiveness of Wales\", wrote Dr Davies.\n\nIn these early years Plaid Genedlaethol Cymru published a monthly paper called Y Ddraig Goch (the Red Dragon, the national symbol of Wales) and held an annual summer school.\n\nH. R. Jones, the party's full-time secretary, established a few party branches, while Valentine served as party president between 1925 and 1926. In the UK General Election of 1929, Valentine stood for Caernarfon and polled 609 votes. The voters who backed him later became known as 'the Gallant Six Hundred' when Dafydd Iwan immortalised them in song.\n\nBy 1932 the aims of self-government and Welsh representation at the League of Nations had been added to that of preserving the Welsh language and culture. However, this move, and the party's early attempts to develop an economic critique, did not lead to the broadening of its appeal beyond that of an intellectual and socially conservative Welsh-language pressure group.", "doc_id": "8e778d9c-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/2007_Pacific_typhoon_season", "document": "The 2007 Pacific typhoon season was a below-average season which featured 24 named storms, 14 typhoons, and five super typhoons. It was an event in the annual cycle of tropical cyclone formation, in which tropical cyclones form in the western Pacific Ocean. The season ran throughout 2007, though most tropical cyclones typically develop between May and November. The season's first named storm, Kong-rey, developed on March 30, while the season's last named storm, Mitag, dissipated on November 27. The season's first typhoon, Yutu, reached typhoon status on May 18, and became the first super typhoon of the year the next day.\n\nThe scope of this article is limited to the Pacific Ocean, to the north of the equator between 100\u00b0E and the 180th meridian. Within the northwestern Pacific Ocean, there are two separate agencies that assign names to tropical cyclones, which can often result in a cyclone having two names. The Japan Meteorological Agency (JMA) will name a tropical cyclone should it be judged to have 10-minute sustained wind speeds of at least 65 km/h (40 mph) anywhere in the basin.
PAGASA assigns unofficial names to tropical cyclones which move into or form as a tropical depression in their area of responsibility, located between 115\u00b0E and 135\u00b0E and between 5\u00b0N and 25\u00b0N, regardless of whether or not a tropical cyclone has already been given a name by the JMA. Tropical depressions that are monitored by the United States' Joint Typhoon Warning Center (JTWC) are given a numerical designation with a \"W\" suffix.\n\nOn March 26, the JTWC identified a broad area of low pressure in the Western North Pacific. It moved west-northwestward over the next few days, slowly gaining organization. According to the Japan Meteorological Agency, it became a tropical depression on March 30. The next day, the Joint Typhoon Warning Center issued a Tropical Cyclone Formation Alert due to an increased consolidation of the low-level circulation of the system. The JTWC issued its first warning on Tropical Depression 01W late that evening local time. As the system continued to strengthen, the JTWC upgraded it to a tropical storm, the first of the season. The JMA followed suit, and named the system Kong-rey. The name was submitted by Cambodia and refers to a character in a Khmer legend; it is also the name of a mountain.\n\nKong-rey continued to organize and intensified into a severe tropical storm early the next morning local time. The JTWC then upgraded it to a typhoon on April 2. As the system took a more poleward track towards the Northern Mariana Islands, the National Weather Service office in Guam noted that damaging winds were no longer expected on the island. Elsewhere in the Marianas, preparations were made and flights were cancelled in anticipation of the typhoon. Kong-rey passed through the islands in the early hours of the morning on April 3 local time. The JMA upgraded Kong-rey to a typhoon later that afternoon, as it developed an eye. It strengthened slightly further before encountering wind shear and colder sea surface temperatures and was downgraded back to a severe tropical storm on April 4. As Kong-rey accelerated towards the northeast, it began undergoing extratropical transition early on April 5 and the JTWC issued its final warning. The JMA issued its final warning on the morning of April 6 after it had completed extratropical transition. No casualties or major damage were reported.\n\nOn May 15, a significant consolidation of organisation in a tropical disturbance located south-southeast of Guam led the Air Force Weather Agency to assign Dvorak technique numbers equating to a wind speed of 45 knots (83 km/h). Later that day, the Japan Meteorological Agency designated the system a tropical depression, and the Joint Typhoon Warning Center issued a Tropical Cyclone Formation Alert. The next day, the JMA began issuing full advisories on the tropical depression. It developed slowly, resulting in a reissuance of the TCFA later that day. In this second TCFA, the JTWC noted \"an increasingly well-defined\" low-level circulation centre. The JTWC upgraded the system to Tropical Depression 02W at 1200 UTC, based on satellite intensity estimates and QuikSCAT data. The JMA designated 02W as Tropical Storm Yutu early on May 17, as the system strengthened further. The name 'Yutu' was contributed by China and refers to a rabbit in a Chinese fable. The JTWC followed suit three hours later, upgrading the system to Tropical Storm 02W as it moved quickly westwards, heading for Yap.
Tropical storm warnings and watches were put in place for most of Yap State, but were later cancelled after Yutu passed through quickly.\n\nIt then took a northwesterly turn, entered the PAGASA area of responsibility on May 18 as it reached severe tropical storm strength, and was named \"Amang\". Later that day, the JTWC upgraded it to a typhoon, identifying a \"distinct eye feature\", and the JMA upgraded the severe tropical storm to a typhoon at 1800 UTC as it continued to intensify. It began to recurve towards Iwo Jima, undergoing rapid intensification, with \"enhanced poleward outflow and low vertical wind shear\". It reached its peak on the evening of May 20, as a strong Category 4-equivalent typhoon, just short of becoming a super typhoon. Despite moving into cooler waters, its strong poleward outflow helped it to maintain a high intensity into the early morning of May 21, while carrying a 20-nautical-mile-wide eye. It then began to gradually weaken, passing over Okinotorishima and near Iwo Jima that day as it sped off to the northeast. Maximum winds on Iwo Jima occurred around 1500 UTC that day, with sustained winds of 66 kt (122 km/h, 76 mph) gusting to 104 kt (193 km/h, 120 mph), when a minimum central pressure of 976 hPa was recorded. It then started extratropical transition, and the JTWC issued its final warning on the morning of May 22. The JMA issued its last advisory after extratropical transition completed a day later.", "doc_id": "8e778ed2-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/The_Settlers", "document": "The Settlers (German: Die Siedler) is a city-building and real-time strategy video game series created by Volker Wertich. The original game was released on the Commodore Amiga in 1993, with subsequent games released primarily on MS-DOS and Microsoft Windows: The Settlers II (1996), The Settlers III (1998), The Settlers IV (2001), The Settlers: Heritage of Kings (2004), The Settlers: Rise of an Empire (2007), and The Settlers 7: Paths to a Kingdom (2010). There are also several spin-offs: The Settlers II (10th Anniversary) (2006) is a remake of The Settlers II, The Settlers DS (2007) is a port of The Settlers II for Nintendo DS, Die Siedler: Aufbruch der Kulturen (2008) is a German-only spiritual successor to 10th Anniversary, The Settlers HD (2009) is a handheld remake of The Settlers IV, and The Settlers Online (2010) is a free-to-play online browser game. With the exception of The Settlers HD, Blue Byte has developed every game in the series, as well as publishing the first three titles. From The Settlers IV onwards, Ubisoft has published all titles.\n\nAn eighth game in the main series, The Settlers: Kingdoms of Anteria, was scheduled for release in 2014, but after the game's closed beta was abruptly shut down by Ubisoft in light of negative feedback, the game was removed from the release schedule. It was ultimately repackaged and released in 2016 as Champions of Anteria, an action role-playing game unrelated to The Settlers series. A franchise reboot, named simply The Settlers, was scheduled for release in 2019, but was postponed and all preorders were refunded. In January 2022, Ubisoft announced that the game would be released in March of that year. In March, however, it was again postponed.\n\nNarratively, each game is a stand-alone story with no connection to the other titles in the series (although Rise of an Empire is an indirect sequel to Heritage of Kings).
From a gameplay perspective, although each game tends to feature its own set of innovations and nuances, broadly speaking, they are all built on a simulation of a supply and demand economic system in which the player must maintain the various chains of production, building up their military strength and the robustness of their economy so as to defeat their opponents and achieve certain predetermined objectives. Some games foreground city-building and complex daisy-chain economic processes whereas others focus on real-time strategy and building a diverse military force. Common game mechanics include resource acquisition, economic micromanagement, managing taxation, maintaining a high standard of living, trade, and technology trees.\n\nCritically, reactions to the games have been mixed, ranging from universal praise for The Settlers II to universal condemnation for The Settlers DS. The series has sold very well, with global sales in excess of 10 million units as of September 2014. It has sold especially well in Europe. The games have also done well at various game award shows, and the series includes two recipients of the \"Best Game\" award at the annual German Developer Innovation Prize.\n\nThe core elements of The Settlers' gameplay are city-building and real-time strategy. Both the original Settlers and The Settlers II are city-building games with real-time strategy elements, and have similar gameplay and game mechanics. Unlike the first two games, The Settlers III and The Settlers IV foreground real-time strategy elements over city-building, with more focus on combat than their predecessors. The Settlers: Heritage of Kings is unique in the franchise insofar as it focuses almost exclusively on real-time strategy and combat. After Heritage of Kings received a negative reaction from fans, the next game, The Settlers: Rise of an Empire, returned to foregrounding city-building over real-time strategy. This was true to an even greater degree in the following game, The Settlers 7: Paths to a Kingdom, whose gameplay was based on the most popular title in the series, The Settlers II.\n\nIn the first five games, the primary goal on each map, broadly speaking, is to build a settlement with a functioning economy, producing sufficient military units so as to conquer rival territories. To achieve this end, the player must, to one degree or another, engage in economic micromanagement, construct buildings, and generate resources. In Rise of an Empire and Paths to a Kingdom, the importance of military conquest is scaled back, with many maps requiring players to accomplish certain predetermined tasks tied to the economic strength of their city. Whilst the first four games feature broadly similar supply and demand-based gameplay, starting with Heritage of Kings, Blue Byte began to alter the game mechanics from title to title. Thus, in Heritage of Kings, there is little focus on micromanagement, daisy-chain economic processes, or construction, and more on technology trees, combat, taxation, and workers' motivation. Rise of an Empire features a significantly simpler economic model than any previous title in the series, with the complexity of the various supply chains significantly streamlined. Paths to a Kingdom features a more robust economy and focuses on micromanagement, daisy-chain economic processes, city organisation, upgrading buildings, technology trees, and trade requirements.
Additionally, for the first time in the series, the gameplay in Paths to a Kingdom is flexible enough to allow players to develop their settlement based upon one (or more) of three basic options \u2013 military, technology or trade.\n\nThe gameplay of every Settlers title revolves around serfs (the titular \"settlers\"). In all games except Heritage of Kings, serfs transport materials, tools and produce, and populate and perform the requisite task of each building. In Heritage, serfs are differentiated from workers \u2013 serfs are the only units capable of constructing new buildings, repairing damage to pre-existing buildings, gathering wood, and extracting resources by hand, whereas workers occupy buildings. In no game except Heritage does the player directly control any individual settler \u2013 instead, the player issues general orders, with the AI handling the delegation to specific settlers. In Heritage of Kings, the player can directly control serfs, such as ordering them to chop down trees in a particular location, or scout unexplored territory. Workers, however, cannot be controlled.\n\nIn The Settlers, The Settlers II, Rise of an Empire, and Paths to a Kingdom, as the player constructs buildings and thus requires settlers to occupy them, the settlers are automatically generated as needed. In The Settlers and The Settlers II, as the settlement continues to grow in size, the quota of settlers will eventually be reached, and the player will need to build a warehouse to generate more settlers. In Rise of an Empire and Paths to a Kingdom, once the settlement's quota has been reached, new settlers can only generate once the player has increased living space, either by building new residences or upgrading existing ones. In both The Settlers III and The Settlers IV, new settlers are not generated as needed; instead, a set number is added to the player's pool upon the construction of residences. In Heritage of Kings, the player manually recruits serfs as needed. Once the serfs have constructed a building which requires workers, those workers will automatically emerge from the village centre and occupy the building. Once the settlement's quota has been reached, the player must either upgrade the centre or build an additional one.\n\nIn the first two games, an important game mechanic is the construction of a road network so as to allow for an efficient transportation system, as any settlers transporting goods must use roads. To maximize distribution, the player must set as many flags as possible on each road. Flags can only be set a certain distance apart, and serve as transport hubs; a settler will carry an item to a flag and set it down, at which point the next settler along will pick up the item and continue, freeing the first settler to return and pick up another item at the previous flag. A major change came in The Settlers III, where roads were no longer necessary, and settlers could walk freely around the player's territory, with the AI handling pathfinding. Aside from The Settlers II (10th Anniversary), roads were not a requirement again until Paths to a Kingdom (players could build roads in Rise of an Empire, but they were optional).", "doc_id": "8e779008-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Ralph_Richardson", "document": "Sir Ralph David Richardson (19 December 1902 \u2013 10 October 1983) was an English actor who, with John Gielgud and Laurence Olivier, was one of the trinity of male actors who dominated the British stage for much of the 20th century.
He worked in films throughout most of his career, and played more than sixty cinema roles. From an artistic but not theatrical background, Richardson had no thought of a stage career until a production of Hamlet in Brighton inspired him to become an actor. He learned his craft in the 1920s with a touring company and later the Birmingham Repertory Theatre. In 1931 he joined the Old Vic, playing mostly Shakespearean roles. He led the company the following season, succeeding Gielgud, who had taught him much about stage technique. After he left the company, a series of leading roles took him to stardom in the West End and on Broadway.\n\nIn the 1940s, together with Olivier and John Burrell, Richardson was the co-director of the Old Vic company. There, his most celebrated roles included Peer Gynt and Falstaff. He and Olivier led the company to Europe and Broadway in 1945 and 1946, before their success provoked resentment among the governing board of the Old Vic, leading to their dismissal from the company in 1947. In the 1950s, in the West End and occasionally on tour, Richardson played in modern and classic works including The Heiress, Home at Seven, and Three Sisters. He continued on stage and in films until shortly before his sudden death at the age of eighty. He was celebrated in later years for his work with Peter Hall's National Theatre and his frequent stage partnership with Gielgud. He was not known for his portrayal of the great tragic roles in the classics, preferring character parts in old and new plays.\n\nRichardson's film career began as an extra in 1931. He was soon cast in leading roles in British and American films including Things to Come (1936), The Fallen Idol (1948), Long Day's Journey into Night (1962) and Doctor Zhivago (1965). He received nominations and awards in the UK, Europe and the US for his stage and screen work from 1948 until his death. Richardson was twice nominated for the Academy Award for Best Supporting Actor, first for The Heiress (1949) and again (posthumously) for his final film, Greystoke: The Legend of Tarzan, Lord of the Apes (1984).\n\nThroughout his career, and increasingly in later years, Richardson was known for his eccentric behaviour on and off stage. He was often seen as detached from conventional ways of looking at the world, and his acting was regularly described as poetic or magical.\n\nRichardson was born in Cheltenham, Gloucestershire, the third son and youngest child of Arthur Richardson and his wife Lydia (n\u00e9e Russell). The couple had met while both were in Paris, studying with the painter William-Adolphe Bouguereau. Arthur Richardson had been senior art master at Cheltenham Ladies' College from 1893.\n\nIn 1907 the family split up; there was no divorce or formal separation, but the two elder boys, Christopher and Ambrose, remained with their father and Lydia left them, taking Ralph with her. The ostensible cause of the couple's separation was a row over Lydia's choice of wallpaper for her husband's study. According to John Miller's biography, whatever underlying causes there may have been are unknown. An earlier biographer, Garry O'Connor, speculates that Arthur Richardson might have been having an extramarital affair. There does not seem to have been a religious element, although Arthur was a dedicated Quaker, whose first two sons were brought up in that faith, whereas Lydia was a devout convert to Roman Catholicism, in which she raised Ralph. 
Mother and son had a variety of homes, the first of which was a bungalow converted from two railway carriages in Shoreham-by-Sea on the south coast of England.\n\nLydia wanted Richardson to become a priest. In Brighton he served as an altar boy, which he enjoyed, but when sent at about fifteen to the nearby Xaverian College, a seminary for trainee priests, he ran away. As a pupil at a series of schools he was uninterested in most subjects and was an indifferent scholar. His Latin was poor, and during church services he would improvise parts of the Latin responses, developing a talent for invention when memory failed that proved useful in his later career.\n\nIn 1919, aged sixteen, Richardson took a post as office boy with the Brighton branch of the Liverpool Victoria insurance company. The pay, ten shillings a week, was attractive, but office life was not; he lacked concentration, frequently posting documents to the wrong people as well as engaging in pranks that alarmed his superiors. His paternal grandmother died and left him \u00a3500, which, he later said, transformed his life. He resigned from the office post, just in time to avoid being dismissed, and enrolled at the Brighton School of Art. His studies there convinced him that he lacked creativity, and that his drawing skills were not good enough.\n\nRichardson left the art school in 1920, and considered how else he might make a career. He briefly thought of pharmacy and then of journalism, abandoning each when he learned how much study the former required and how difficult mastering shorthand for the latter would be. He was still unsure what to do, when he saw Sir Frank Benson as Hamlet in a touring production. He was thrilled, and felt at once that he must become an actor.\n\nButtressed by what was left of the legacy from his grandmother, Richardson determined to learn to act. He paid a local theatrical manager, Frank R. Growcott, ten shillings a week to take him as a member of his company and to teach him the craft of an actor. He made his stage debut in December 1920 with Growcott's St Nicholas Players at the St Nicholas Hall, Brighton, a converted bacon factory. He played a gendarme in an adaptation of Les Mis\u00e9rables and was soon entrusted with larger parts, including Banquo in Macbeth and Malvolio in Twelfth Night.", "doc_id": "8e779134-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/South_Africa_national_soccer_team", "document": "The South Africa national soccer team represents South Africa in men's international soccer and it is run by the South African Football Association, the governing body for Soccer in South Africa. The team's nickname is Bafana Bafana (The Boys), and South Africa's home ground is FNB Stadium, which is located in Johannesburg. The team's greatest result was winning the Africa Cup of Nations at home in 1996. The team is a member of both FIFA and Confederation of African Football (CAF).\n\nHaving played their first match in 1906, they returned to the world stage in 1992, after 16 years of being banned from FIFA, and 40 years of effective suspension due to the apartheid system.[7] South Africa became the first African nation to host the FIFA World Cup when it was granted host status for the 2010 edition. The team's Siphiwe Tshabalala was also the first player to score in this World Cup during the opening game against Mexico, which was followed by an iconic Macarena-style goal celebration from five South African players. 
Despite defeating France 2\u20131 in their final game of the group stage, they failed to progress from the first round of the tournament, becoming the first host nation in the history of the FIFA World Cup to exit in the group stage. Despite this, the team was ranked 20th out of 32 sides. As of 23 June 2022, the team is ranked 12th in Africa (CAF) and, having moved up one spot, 68th in the world (FIFA).\n\nSoccer first arrived in South Africa through colonialism in the late nineteenth century, as the game was popular among British soldiers. From the earliest days of the sport in South Africa until the end of apartheid, organised soccer was affected by the country's system of racial segregation. The all-white Football Association of South Africa (FASA) was formed in 1892, while the South African Indian Football Association (SAIFA), the South African Bantu Football Association (SABFA) and the South African Coloured Football Association (SACFA) were founded in 1903, 1933 and 1936 respectively.\n\nIn 1903 the SAFA re-affiliated with the English Football Association after the Second Boer War between the British Empire and the Boer state. There was a plan for a tournament in Argentina, with South Africa and Fulham as guest teams, but it was not carried out. Nevertheless, South Africa traveled to South America in 1906 to play a series of friendly matches there.\n\nSouth Africa played a total of 12 matches in South America, winning 11, with 60 goals scored and only 7 conceded. Opponents included Belgrano A.C., the Argentina national team, a Liga Rosarina combined side, Estudiantes (BA) and Quilmes. The only team to beat South Africa was the Argentine side Alumni, 1\u20130, at the Sociedad Sportiva stadium in Buenos Aires on 24 June, although the South Africans took revenge on 22 July, defeating Alumni 2\u20130.\n\nThe players were exclusively white: civil servants, government employees, bankers and civil engineers. Seven of the 15 players were born in South Africa and eight originated from England and Scotland.\n\nSouth Africa was one of four African nations to attend FIFA's 1953 congress, at which the four demanded, and won, representation on the FIFA executive committee. Thus the four nations (South Africa, Ethiopia, Egypt and Sudan) founded the Confederation of African Football in 1956, and the South African representative, Fred Fell, sat at the first meeting as a founding member. It soon became clear, however, that South Africa's constitution prohibited racially mixed teams from competitive sport, and so they could only send either an all-black side or an all-white side to the planned 1957 African Cup of Nations. This was unacceptable to the other members of the Confederation, and South Africa was disqualified from the competition; some sources, however, say that they withdrew voluntarily.\n\nAt the second CAF conference in 1958, South Africa were formally expelled from CAF. The all-white FASA were admitted to FIFA in the same year, but in August 1960 it was given an ultimatum of one year to fall in line with the non-discriminatory regulations of FIFA. On 26 September 1961 at the annual FIFA conference, the South African association was formally suspended from FIFA. Sir Stanley Rous, president of The Football Association of England and a champion of South Africa's FIFA membership, was elected FIFA President a few days later.
Rous was adamant that sport, and FIFA in particular, should not embroil itself in political matters, and against fierce opposition he continued to resist attempts to expel South Africa from FIFA. The suspension was lifted in January 1963 after a visit by Rous to South Africa to investigate the state of soccer in the country.\n\nRous declared that if the suspension were not lifted, soccer there would be discontinued, possibly to the point of no recovery. The next annual conference of FIFA in October 1964 took place in Tokyo and was attended by a larger contingent of representatives from African and Asian associations, and there the suspension of South Africa's membership was re-imposed. In 1976, after the Soweto uprising, South Africa was formally expelled from FIFA.\n\nIn 1991, as the apartheid system was being dismantled, a new multi-racial South African Football Association was formed and admitted to FIFA, finally allowing South Africa to enter the qualifying stages for subsequent World Cups.
Spears stated her desire to make the album \"fresh-sounding for the clubs or something that you play in your car when you're going out at night that gets you excited, but I wanted it to sound different from everything else out right now.\" Spears also stated that she wanted to make sure Femme Fatale was completely different from her previous studio album Circus (2008). After \"Hold It Against Me\" was written, Luke and Martin originally wanted to give the track to Katy Perry, but later decided that it was not the right fit for her. They continued to work on the song, and Luke told Billboard that before giving it to Spears he wanted to make sure it sounded different from his previous recordings. Darkchild stated that while working with him, Spears was very \"hands-on\" and \"had a lot of ideas for [him].\" He later commented he had produced two songs for the album, with one of them featuring Travis Barker. Darkchild added that the song \"[has] this rock feel which is out of the box, out of my norm, and I think it's out of her norm as well.\"\n\nDr. Luke revealed in February 2011 that a final track listing had not yet been chosen. Later that month, Spears worked with will.i.am. Spears later commented that she is a fan of the Black Eyed Peas, and would love to work with will.i.am again in the future. She also said that she discovered Sabi through a friend's recommendation, and had always wanted to feature a new artist on one of her albums; hence they recorded \"(Drop Dead) Beautiful\". British producer Fraser T Smith worked with Spears on three tracks and complimented her work ethic, saying that her voice was powerful and that she focused on the music. William Orbit confirmed he had co-written a track for Spears with Klas \u00c5hlund, but it was left off the final track listing. Orbit stated that he was displeased with the decision, and commented, \"The Britney thing. Look, I went to a writing camp at Teresa's. Had lovely time. Word got out. Assumptions were made. Dr Luke is exec[utive] prod[ucer] and he locks in locks out whoever he likes. And (do [I] hear [you] ask) where B's at in all this? I surely don't know. [D]id a song [with] Klas Ahlund, who wrote 'Piece of Me'. And is killa. But not on [Femme Fatale] apparently. But a good song is a good song regardless.\"\n\nMusic writers noted electropop, dance-pop, EDM, and synth-pop styles on Femme Fatale. Music journalist Jody Rosen wrote of the album, \"Conceptually it's straightforward: a party record packed with sex and sadness\". The album was compared to Spears's previous albums, In the Zone (2003), Blackout (2007) and Circus (2008). Although Spears was criticized for her lack of involvement in the album's production and writing, she wrote the song \"Scary\", produced by Fraser T Smith, which was included on the Japanese deluxe edition of the album. The album opens with \"Till the World Ends\", co-written by Kesha, which was described as an uptempo dance-pop and electropop song, with an electro beat and elements of techno and Eurodance. The song opens with sirens and a \"sizzling\" bassline. Critics complimented the song's \"anthemic nature\" and \"chant-like chorus\". The second track and lead single \"Hold It Against Me\" is a dance-pop song which features industrial beats, a dubstep-influenced breakdown, elements of grime, and a final chorus with elements of rave.
The lyrics portray the singer seducing someone on the dancefloor, while the chorus revolves around pick-up lines, with Spears singing: \"If I said I want your body now, would you hold it against me?\"[33] \"Hold It Against Me\" and Spears were complimented by Rick Florino of Artistdirect for \"stepping into new territory and pushing the boundaries of dance-pop once more.\" The third track \"Inside Out\" is an electropop song. It features themes of dubstep and R&B, complemented with \"earth-shattering synths\". The song was praised for its intricate production and has been compared to her earlier work on albums In the Zone and Circus, and also to Janet Jackson, as well as to Madonna's album Ray of Light (1998) and song \"Music\" (2000). Spears crescendos: \"Baby shut your mouth and turn me inside out\" during the chorus section, and then goes on to \"Hit me one more time it's so amazing\" and \"You're the only one who's ever drove me crazy\", referencing her songs \"...Baby One More Time\" and \"(You Drive Me) Crazy\". \"I Wanna Go\", the fourth track, is a dance-pop and Hi-NRG song that includes elements of techno and a heavy bassline. The song contains a whistled melody. In the chorus, she stutters: \"I-I-I wanna go-o-o / All the wa-a-ay / Taking out my freak tonight\". The \"builds and breaks\" were compared to her album Blackout.\n\n\"How I Roll\" is the fifth track, produced by Bloodshy, Henrik Jonback and Magnus, where Spears \"pirouettes from a gulping in-and-out breath effect\", and was described as a \"bubbly, playful pop song\". Spears' voice is heavily altered, being put through many distorters, filters, and blenders. The song uses constant rushed handclaps, with elements which were compared to Janet Jackson's \"Strawberry Bounce\". The sixth and seventh tracks \"(Drop Dead) Beautiful\" and \"Seal It With a Kiss\" were described as \"fillers\" by Christopher Kostakis of Samesame.com.au. However, Keith Caulfield of Billboard states that \"with giggly lyrical couplings like 'your body looks so sick, I think I caught the flu' and 'you must be B.I.G. because you got me hypnotized' -- '[Drop Dead] Beautiful' doesn't take itself too seriously.\" \"Big Fat Bass\" is Femme Fatale's eighth track, and it was said that it \"sticks to dancefloor essentials\". The song was further noted by Idolator as being catchy but repetitive. \"Trouble for Me\", the ninth song on the album, features a pre-chorus filled with \"melting, wheezing synths\" likened to a \"Wiley grime wobble,\" segueing into a \"Janet Jackson vocal.\" Spears' voice had been Auto-Tuned, but was described as \"raw\", and the tones and whines as \"sexy\" and \"one of a kind\". \"Criminal\", the last track on the album's standard edition, is a guitar-driven midtempo song, which incorporates a folk-style flute melody. Erin Thompson of the Seattle Weekly said the song \"takes a breather from aggressive, wall-to-wall synths, driven instead by a steady guitar rhythm and an oddly Asian folky-sounding flute melody.\" In the verses, Spears sings about being in love with a bad boy and outlaw, in lyrics such as \"He is a hustler / He's no good at all / He is a loser, he's a bum, bum, bum, bum\" and \"He is a bad boy with a tainted heart / And even I know this ain't smart\".
During the chorus, she pleads to her mother not to worry in lines such as \"But mama I'm in love with a criminal\" and \"Mama please don't cry / I will be alright.\"\n\nAccording to Billboard, \"Up n' Down\" \"heads back to the dance floor, where we find ourselves picturing an aggressive Spears going 'Up N' Down.'\" The fourteenth track, \"He About to Lose Me\", is a pop rock-influenced ballad described as \"[packing] a serious emotional punch. Spears sings about being at the club, entranced by a new man she's made contact with \u2013 all the while thinking of her current beau, who's at home. Will she leave the club with the new guy? Or will she go home to her man \u2013 a guy she's not even all that sure loves her anymore?\" The final track on the deluxe version, \"Don't Keep Me Waiting\", has been described as \"a new wavey rock moment for Spears, where fuzzed-out guitars are paired with what sound like live drums on the ready-for-the-arena track.\" The seventeenth and final track on the Japanese deluxe edition of the album, \"Scary\", is another up-tempo dance song that finds Spears on the prowl. \"I just want your body, and I know that you want mine,\" she sings. As the chorus opens up, Spears reveals the extent of her lust: \"It's scary, yeah / I think I need some hypnotherapy / I want you so bad it's scary.\"
With the possible exception of breast cancer, there is insufficient evidence that supplementation with omega\u22123 fatty acids has an effect on different cancers. The effect of consumption on prostate cancer is not conclusive. Higher blood levels of DPA are associated with a decreased risk, but higher blood levels of combined EPA and DHA have been associated with a possibly increased risk of more aggressive prostate cancer. In people with advanced cancer and cachexia, omega\u22123 fatty acid supplements may be of benefit, improving appetite, weight, and quality of life.\n\nModerate- and high-quality evidence from a 2020 review showed that EPA and DHA, such as that found in omega\u22123 polyunsaturated fatty acid supplements, do not appear to improve mortality or cardiovascular health. There is weak evidence indicating that \u03b1-linolenic acid may be associated with a small reduction in the risk of a cardiovascular event or the risk of arrhythmia.\n\nA 2018 meta-analysis found no support that daily intake of one gram of omega\u22123 fatty acid in individuals with a history of coronary heart disease prevents fatal coronary heart disease, nonfatal myocardial infarction or any other vascular event. However, omega\u22123 fatty acid supplementation greater than one gram daily for at least a year may be protective against cardiac death, sudden death, and myocardial infarction in people who have a history of cardiovascular disease. No protective effect against the development of stroke or all-cause mortality was seen in this population. A 2018 study found that omega\u22123 supplementation was helpful in protecting cardiac health in those who did not regularly eat fish, particularly in the African American population. Eating a diet high in fish that contain long-chain omega\u22123 fatty acids does appear to decrease the risk of stroke. Fish oil supplementation has not been shown to benefit revascularization or abnormal heart rhythms and has no effect on heart failure hospital admission rates. Furthermore, fish oil supplement studies have failed to support claims of preventing heart attacks or strokes. In the EU, a review by the European Medicines Agency of omega\u22123 fatty acid medicines containing a combination of an ethyl ester of eicosapentaenoic acid and docosahexaenoic acid at a dose of 1 g per day concluded that these medicines are not effective in secondary prevention of heart problems in patients who have had a myocardial infarction.\n\nEvidence suggests that omega\u22123 fatty acids modestly lower blood pressure (systolic and diastolic) in people with hypertension and in people with normal blood pressure. Omega\u22123 fatty acids can also reduce heart rate, an emerging risk factor. Some evidence suggests that people with certain circulatory problems, such as varicose veins, may benefit from the consumption of EPA and DHA, which may stimulate blood circulation and increase the breakdown of fibrin, a protein involved in blood clotting and scar formation. Omega\u22123 fatty acids reduce blood triglyceride levels, but do not significantly change the level of LDL cholesterol or HDL cholesterol.
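The tiered dosing guidance quoted in the paragraph below reduces to a simple lookup by triglyceride level. The following is a minimal illustrative sketch in Python (not medical advice); the function name and returned strings are hypothetical, while the thresholds in mg/dL and the daily gram ranges of combined EPA and DHA are taken from that guidance:

# Illustrative lookup of the 2011 AHA dose tiers for combined EPA and DHA,
# keyed by triglyceride level in mg/dL. Not medical advice; the function
# name and returned strings are hypothetical. The source gives the top tier
# as ">500 mg/dL"; treating exactly 500 as top-tier is a simplification here.

def aha_2011_epa_dha_guidance(triglycerides_mg_dl: float) -> str:
    """Map a triglyceride level to the AHA's 2011 tiered EPA+DHA guidance."""
    if triglycerides_mg_dl >= 500:
        return "2-4 g/day using a prescription product, under a physician's supervision"
    if triglycerides_mg_dl >= 200:
        return "1-2 g/day"
    if triglycerides_mg_dl >= 150:
        return "0.5-1.0 g/day"
    return "below the borderline range; no tiered recommendation"

for level in (120, 175, 350, 600):
    print(level, "mg/dL ->", aha_2011_epa_dha_guidance(level))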
The American Heart Association position (2011) is that borderline elevated triglycerides, defined as 150\u2013199 mg/dL, can be lowered by 0.5\u20131.0 grams of EPA and DHA per day; that high triglycerides, 200\u2013499 mg/dL, benefit from 1\u20132 g/day; and that levels >500 mg/dL should be treated under a physician's supervision with 2\u20134 g/day using a prescription product.[40] In this population, omega\u22123 fatty acid supplementation decreases the risk of heart disease by about 25%.\n\nA 2019 review found that omega-3 fatty acid supplements make little or no difference to cardiovascular mortality and that patients with myocardial infarction gain no benefit from taking the supplements. A 2021 review found that omega-3 supplementation did not affect cardiovascular disease outcomes. A 2021 meta-analysis showed that use of marine omega-3 supplementation was associated with an increased risk of atrial fibrillation, with the risk appearing to increase for doses greater than one gram per day.", "doc_id": "8e779670-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/2015_NBA_playoffs", "document": "The 2015 NBA playoffs was the postseason tournament of the National Basketball Association's 2014\u201315 season. The tournament concluded with the Western Conference champion Golden State Warriors defeating the Eastern Conference champion Cleveland Cavaliers 4 games to 2 in the NBA Finals. Andre Iguodala was named NBA Finals MVP.\n\nFor the first time since 2005\u201306, all teams from a particular division made the playoffs (in this case, all five teams from the Southwest Division).\n\nThe San Antonio Spurs made their 18th straight playoff appearance, while the Atlanta Hawks (eighth straight playoff appearance) and the Golden State Warriors (third straight playoff appearance) entered the playoffs as the first seeds of their respective conferences. The Warriors and Hawks advanced to the Conference Finals for the first time since 1976 and 1970, respectively.\n\nThe Cleveland Cavaliers made their first postseason appearance since 2010, the final season of LeBron James' first stint with the Cavaliers. They also made their first Conference Finals appearance since 2009, where they lost 4\u20132 to the Orlando Magic, and their first Finals appearance since 2007, when they were swept by the San Antonio Spurs. On the other hand, James' former team, the Miami Heat, missed the playoffs after making the previous year's Finals, becoming the first team to do so since the 2005 Lakers. Miami had qualified for the playoffs for six consecutive seasons before missing this year, also reaching the NBA Finals four consecutive times. The Heat and their in-state rivals, the Orlando Magic, both missed the playoffs in the same season for the first time since 1993.\n\nThe Oklahoma City Thunder and the Indiana Pacers were conference finalists the year before but failed to make the playoffs. Oklahoma City and Indiana were tied with the New Orleans Pelicans and the Brooklyn Nets with 45 and 38 wins, respectively, but missed the playoffs due to tiebreakers.\n\nDespite starting their respective seasons in a rebuilding mode, both the Milwaukee Bucks and the Boston Celtics returned to the playoffs after a one-year absence.
Bucks head coach Jason Kidd became the first head coach to lead two teams to the playoffs in his first two seasons, having led the Nets to the playoffs the previous season.\n\nThe first round of the playoffs saw a record six teams take a 3\u20130 lead in their respective series, the first time it had happened since the first round expanded to a best-of-seven series in 2003.\n\nThe fifth seed defeated the fourth seed in both conferences for the third straight year.\n\nGame 7 between the Clippers and Spurs ensured a 16th straight postseason in which at least one Game 7 was played; 1999 was the last postseason to not feature a Game 7.\n\nThe San Antonio Spurs became the first defending champions to be eliminated in the first round since the 2011\u201312 Dallas Mavericks. This was only the second time it had happened since 2000.\n\nWith the Spurs being eliminated in the first round, none of the eight teams remaining at the beginning of the Conference Semifinals had previously won an NBA championship in the 21st century. After the first round of the playoffs, of the teams who had previously won an NBA championship, the Chicago Bulls had the shortest drought at 17 years, having most recently won an NBA championship in 1998, while the Atlanta Hawks had the longest overall drought at 57 years, having won their only previous championship in 1958 when the franchise was based in St. Louis.\n\nFor the first time since 1970, the Hawks made the Conference Finals (then called the Division Finals). Since 1970, they had lost all 15 Division or Conference Semifinal series they participated in. The Warriors made their first conference finals appearance since 1976, and the Houston Rockets made their first conference finals appearance since 1997. These three were the NBA teams which had been waiting for the longest time for a return to the conference finals.\n\nFor the second straight year, the No. 1 seed faced the No. 2 seed in the Conference Finals, and for the fourth time since 2000.\n\nIn the second round, all teams that held a 2\u20131 series lead within the first three games of their respective series had gone on to lose that series.\n\nThe Rockets became only the second franchise to twice come back from 3\u20131 series deficits to win the series by defeating the Los Angeles Clippers in the Semifinals.
They had first achieved the feat 20 years earlier against the Phoenix Suns. The Boston Celtics are the only other franchise to twice make this comeback, doing it in 1968 and 1981. Overall, eleven teams have achieved the feat, with the Warriors doing it in the Conference Finals and the Cavaliers doing it in the NBA Finals the year after.\n\nFor the first time in NBA playoff history, both conference finals teams, the Warriors of the West and the Cavaliers of the East, held commanding 3\u20130 series leads. Cleveland went on to the finals, sweeping the Atlanta Hawks 4\u20130, while Golden State won their series 4\u20131, defeating the Houston Rockets.\n\nFor the first time since the inaugural Basketball Association of America season in 1946\u201347, two rookie coaches, David Blatt of the Cavaliers and Steve Kerr of the Warriors, met each other in the NBA finals.", "doc_id": "8e7797ce-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Line_2_(Sound_Transit)", "document": "The East Link Extension, also known as Line 2 (officially the 2 Line), is a future light rail line serving the Eastside region of the Seattle metropolitan area in the U.S. state of Washington. It will be part of Sound Transit's Link light rail system, running 14 miles (23 km) from west to east and serving 10 stations in Downtown Seattle, Mercer Island, Bellevue, and Redmond. Line 2 is scheduled to open in 2024 and will continue into the Downtown Seattle Transit Tunnel and share stations with Line 1. A 3.7-mile-long (6.0 km) extension to Downtown Redmond with two additional stations is scheduled to open in 2025.\n\nThe East Link project was approved by voters in the 2008 Sound Transit 2 ballot measure, with construction costs projected at $3.7 billion. The line will use the Homer M. Hadley Memorial Bridge, one of the Interstate 90 floating bridges, which was constructed in 1989 with the intent to convert its reversible express lanes to light rail. Early transit plans from the 1960s proposed an Eastside rail system, but preliminary planning on the system did not begin until Sound Transit's formation in the early 1990s. The proposed alignment of the East Link project was debated by the Bellevue city council in the early 2010s, with members split on two different routes south of downtown Bellevue; city funding for the downtown segment's tunnel was also debated and ultimately included in the final agreement. The alignment was finalized in 2013, after more than two years of debate that delayed the beginning of construction to 2016 and the completion of the project from 2021 to 2023. The line will be the world's first railway constructed on a floating bridge and is expected to carry 50,000 daily riders by 2030.\n\nIn 2005, WSDOT conducted a live load test on the Interstate 90 floating bridge using 65-foot (20 m) flatbed trucks carrying concrete weights to simulate the weight of light rail trains and test its performance. Using the results, which matched those from an earlier computer simulation, WSDOT concluded that the bridge could carry the weight of light rail trains after minor changes to sections of the transition spans were made during construction. Sound Transit later determined in an engineering study that rail joints could be designed to accommodate the multi-directional movement of the floating bridge, with special design considerations and speed restrictions.
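Those restrictions reduce to simple threshold rules. As a minimal illustrative sketch in Python, using the wind-gust limits quoted in the paragraphs below (trains are halted by gusts of 40 mph from the north or 50 mph from the south); the function and constant names are hypothetical, not Sound Transit's actual control logic:

# Illustrative sketch of the reported wind-based operating rule for light rail
# on the Interstate 90 floating bridge. Constant and function names are
# hypothetical; this is not Sound Transit's actual control system.

NORTH_GUST_LIMIT_MPH = 40.0  # gusts from the north at or above this halt trains
SOUTH_GUST_LIMIT_MPH = 50.0  # gusts from the south at or above this halt trains

def trains_may_cross(gust_mph: float, wind_from_north: bool) -> bool:
    """Return True if the reported gust is below the directional limit."""
    limit = NORTH_GUST_LIMIT_MPH if wind_from_north else SOUTH_GUST_LIMIT_MPH
    return gust_mph < limit

print(trains_may_cross(35.0, wind_from_north=True))   # True: below the 40 mph north limit
print(trains_may_cross(45.0, wind_from_north=True))   # False: at or above the north limit
print(trains_may_cross(45.0, wind_from_north=False))  # True: below the 50 mph south limit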
In 2008, the state legislature's Joint Transportation Committee commissioned an independent review of potential issues that would arise with light rail operations on the floating bridge. The panel identified 23 issues, including stray currents from the electrical system that could cause corrosion, the weight of the tracks and catenary on top of the deck, the design of the expansion joints, and a needed seismic upgrade for the bridge. The panel recommended several mitigation measures for the identified issues, which were accepted for consideration by Sound Transit, and gave the preliminary go-ahead on the project.\n\nSound Transit authorized a $53 million budget for preliminary engineering work on the floating bridge segment in 2011, contracting out to a team led by Parsons Brinckerhoff and Balfour Beatty. Preliminary design on the track bridge system to be used over the bridge's expansion joints was completed in early 2012, following the development of computer models and prototypes tested at the University of Washington. A 5,000-foot-long (1,500 m) replica of the bridge's light rail tracks, complete with an electrified overhead line, was built for field testing at the Transportation Technology Center in Pueblo, Colorado, using two light rail vehicles from Central Link. The track bridge system was designed to accommodate the bridge's six ranges of motion and changes in lake level, and to allow trains to operate at the full speed of 55 miles per hour (89 km/h). The 43-foot-long (13 m) track bridges consist of curved steel platforms placed under the tracks, connected to the railroad ties by pivoting bearings that move independently of the tracks, allowing them to remain parallel; the pivoting bearings would also stabilize the railroad ties during an earthquake, moving slightly apart to accommodate the seismic waves. Under the steel platforms, a series of flexible bearings would allow for the tracks to rise and fall by up to 3.6 inches (9.1 cm) while following the motion of the bridge deck. Trains would be halted from crossing the bridge in the event of a major windstorm, with gusts of 40 miles per hour (64 km/h) from the north or 50 miles per hour (80 km/h) from the south. The design of the system, which would make East Link the first railway ever constructed over a floating bridge, was recognized by Popular Science magazine in its 2017 \"Best of What's New\" awards. The design of the seismic system and steel frames to be installed inside the floating pontoons added $225 million in construction costs, increasing the construction budget by 46 percent, and was paid for using contingency funds.\n\nThe use of the floating bridge for light rail service remained controversial after the passage of Sound Transit 2 in 2008. Bellevue developer Kemper Freeman filed a lawsuit against the state government in 2009, arguing that the 18th amendment of the state constitution prohibited the use of the gas tax-funded bridge for non-road uses. The case was argued before the state supreme court, which ruled in April 2011 that the case should be heard in a lower court first. Days later, Freeman re-filed the lawsuit in the Kittitas County Superior Court, naming Governor Christine Gregoire and Secretary of Transportation Paula Hammond as defendants. A judge in the Kittitas court issued a summary judgment in favor of Sound Transit and WSDOT, effectively halting the lawsuit.
A third lawsuit was filed by Freeman in the state supreme court, where a 7\u20132 decision in September 2013 deemed that the conversion of the express lanes for light rail was not unconstitutional.", "doc_id": "8e7798f0-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Rammstein", "document": "Rammstein (German for ramming stone) is a German Neue Deutsche H\u00e4rte band formed in Berlin in 1994. The band's lineup\u2014consisting of lead vocalist Till Lindemann, lead guitarist Richard Kruspe, rhythm guitarist Paul Landers, bassist Oliver Riedel, drummer Christoph Schneider, and keyboardist Christian \"Flake\" Lorenz\u2014has remained unchanged throughout their history, along with their approach to songwriting, which consists of Lindemann writing and singing the lyrics over instrumental pieces the rest of the band has completed beforehand. Prior to their formation, some members were associated with the punk rock acts Feeling B and First Arsch.\n\nAfter winning a local contest, Rammstein was able to record demos and send them to different record labels, eventually signing with Motor Music. Working with producer Jacob Hellner, they released their debut album Herzeleid in 1995. Though the album initially sold poorly, the band gained popularity through their live performances and the album eventually reached No. 6 in Germany. Their second album, Sehnsucht, was released in 1997 and debuted at No. 1 in Germany, resulting in a worldwide tour lasting nearly four years and spawning the successful singles \"Engel\" and \"Du hast\" and the live album Live aus Berlin (1999). Following the tour, Rammstein signed with major label Universal Music and released Mutter in 2001. Six singles were released from the album, all charting in countries throughout Europe. The lead single, \"Sonne\", reached No. 2 in Germany. Rammstein released Reise, Reise in 2004 and had two more singles reach No. 2 in Germany: \"Mein Teil\" and \"Amerika\"; the former song reached No. 1 in Spain, becoming their first No. 1 single.\n\nTheir fifth album, Rosenrot, was released in 2005, and the lead single, \"Benzin\", reached No. 6 in Germany. Their second live album, V\u00f6lkerball, was released in 2006. The band released their sixth album, Liebe ist f\u00fcr alle da, in 2009, with its lead single, \"Pussy\", becoming their first No. 1 hit in Germany despite having a controversial music video that featured hardcore pornography. The band then entered a recording hiatus and toured for several years, releasing the Made in Germany greatest hits album as well as the Rammstein in Amerika and Paris live albums. After a decade without new music, Rammstein returned in 2019 with the song \"Deutschland\", which became their second No. 1 hit in Germany. Their untitled seventh studio album was released in May 2019 and reached No. 1 in 14 countries. While sheltering during COVID-19 lockdowns, the band spontaneously wrote and recorded their eighth studio album, Zeit, which was released in April 2022.\n\nRammstein was one of the first bands to emerge within the Neue Deutsche H\u00e4rte genre, with their debut album leading the music press to coin the term, and their style of music has generally had a positive reception from music critics. Commercially, the band has been very successful, earning many No. 1 albums as well as gold and platinum certifications in countries around the world. Their grand live performances, which often feature pyrotechnics, have contributed to the growth in their popularity. 
Despite success, the band has been subject to some controversies, with their overall image having been criticized; for instance, the song \"Ich tu dir weh\" forced its parent album Liebe ist f\u00fcr alle da to be re-released in Germany with the song removed due to its sexually explicit lyrics.\n\nIn 1989, East German guitarist Richard Kruspe escaped to West Berlin and started the band Orgasm Death Gimmick. At that time, he was heavily influenced by US music, especially that of rock group Kiss. After the Berlin Wall came down, he moved back home to Schwerin, where Till Lindemann worked as a basket-weaver and played drums in the band First Arsch (loosely translated as \"First Arse\" or \"First Ass\"). At this time, Kruspe lived with Oliver Riedel of the Inchtabokatables and Christoph Schneider of Die Firma.\n\nIn 1992, Kruspe made his first trip to the United States with Till Lindemann and Oliver \"Ollie\" Riedel. He realized that he did not want to make US music and concentrated on creating a unique German sound. Kruspe, Riedel and Schneider started working together on a new project in 1993. Finding it difficult to write both music and lyrics, Kruspe persuaded Lindemann, whom he had overheard singing while he was working, to join the fledgling group. The band called themselves Rammstein-Flugschau (Rammstein Airshow) after the 1988 Ramstein air show disaster. Guitarist Paul Landers said the spelling of Ramstein with the extra \"m\" was a mistake. After the band became popular, the band members denied the connection to the air show disaster and said that their name was inspired by the giant doorstop-type devices found on old gates, called Rammsteine. The extra \"m\" in the band's name makes it translate literally as \"ramming stone\". In a 2019 feature, Metal Hammer explained that the band was named after one of their earliest songs, \"Ramstein\", written after the air show disaster at the American airbase in Ramstein. According to the band, people started to refer to them as \"the band with the 'Ramstein song'\" and later as the \"Ramstein band\".\n\nRammstein co-existed with the members' previous projects for about a year and a half. Members invested the money raised from Feeling B shows in Rammstein. They recorded their first songs in a building that had been squatted by Feeling B frontman Aljoscha Rompe. A contest was held in Berlin for amateur bands in 1994, the winner of which would receive access to a professional recording studio for a whole week. Kruspe, Riedel, Schneider, and Lindemann entered and won the contest with a four-track demo tape containing demo versions of songs from Herzeleid, written in English. This caught the attention of Landers, who wanted to join the project upon hearing the demo. To complete their sound, Rammstein attempted to recruit Christian \"Flake\" Lorenz, who had played with Landers in Feeling B. Though initially hesitant, Lorenz eventually agreed to join the band. Later, Rammstein were signed by Motor Music.\n\nRammstein began to record their first studio album, Herzeleid, in March 1995 with producer Jacob Hellner. They released their first single \"Du riechst so gut\" that August and released the album in September. Later that year, they toured with Clawfinger in Warsaw and Prague. Rammstein headlined a 17-show tour of Germany in December, which helped boost the band's popularity and establish them as a credible live act. They went on several tours throughout early 1996, releasing their second single, \"Seemann\", on 8 January.
On 27 March 1996, Rammstein performed on MTV's Hanging Out in London, their first performance in the UK. Their first major boost in popularity outside Germany came when Nine Inch Nails frontman Trent Reznor chose two Rammstein songs, \"Heirate mich\" and \"Rammstein\", during his work as music director for David Lynch's 1997 film Lost Highway. The soundtrack for the film was released in the U.S. in late 1996 and later throughout Europe in April 1997. In mid-1996, they headlined a tour of their own in small, sold-out venues. Rammstein went on to tour through Germany, Austria, and Switzerland from September to October 1996, performing an anniversary concert on 27 September called \"100 years of Rammstein\". Guests at the concert included Moby, Bobo, and the Berlin Session Orchestra, while Berlin director Gert Hof was responsible for the light show.", "doc_id": "8e779a9e-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Streetcars_in_North_America", "document": "Streetcars or trolley(car)s (North American English for the European word tram) were once the chief mode of public transit in hundreds of North American cities and towns. Most of the original urban streetcar systems were either dismantled in the mid-20th century or converted to other modes of operation, such as light rail. Today, only Toronto still operates a streetcar network essentially unchanged in layout and mode of operation.\n\nOlder surviving lines and systems in Boston, Cleveland, Mexico City, Newark, Philadelphia, Pittsburgh, and San Francisco were often infrastructure-heavy systems with tunnels, dedicated right-of-way, and long travel distances, or have largely rebuilt their streetcar systems as light rail systems. About 22 North American cities, starting with Edmonton, Calgary and San Diego, have installed new light rail systems, some of which run along historic streetcar corridors. A few recent cases feature mixed-traffic street-running operation like a streetcar. Portland, Oregon, Seattle, and Salt Lake City have built both modern light rail and modern streetcar systems, while Tucson, Oklahoma City and Atlanta have built new modern streetcar lines. A few other cities and towns have restored a small number of lines to run heritage streetcars either for public transit or for tourists; many are inspired by New Orleans' St. Charles Streetcar Line, generally viewed as the world's oldest continuously operating streetcar line.\n\nFrom the 1820s to the 1880s, urban transit in North America began when horse-drawn omnibus lines started to operate along city streets. Examples included Gilbert Vanderwerken's 1826 omnibus service in Newark, New Jersey. Before long, omnibus companies sought to boost profitability of their wagons by increasing ridership along their lines. Horsecar lines simply ran wagons along rails set in a city street instead of on the unpaved street surface as the omnibus lines used. When a wagon was drawn upon rails, the rolling resistance of the vehicle was lowered and the average speed was increased.\n\nA horse or team that rode along rails could carry more fare-paying passengers per day of operation than those that did not have rails. North America's first streetcar lines opened in 1832 from downtown New York City to Harlem by the New York and Harlem Railroad, in 1834 in New Orleans, and in 1849 in Toronto along the Williams Omnibus Bus Line.\n\nThese streetcars used horses and sometimes mules.
Mules were thought to give more hours per day of useful transit service than horses and were especially popular in the south in cities such as New Orleans, Louisiana. In many cities, streetcars drawn by a single animal were known as \"bobtail streetcars\" whether mule-drawn or horse-drawn. By the mid-1880s, there were 415 street railway companies in the U.S. operating over 6,000 miles (9,700 km) of track and carrying 188 million passengers per year using animal-drawn cars. In the nineteenth century, Mexico had streetcars in around 1,000 towns, many of them animal-powered. The 1907 Anuario Estad\u00edstico lists 35 animal-powered streetcar lines in Veracruz state, 80 in Guanajuato, and 300 lines in Yucat\u00e1n.\n\nAlthough most animal-drawn lines were shut down in the 19th century, a few lines lasted into the 20th century and later. Toronto's horse-drawn streetcar operations ended in 1891. New York City saw regular horsecar service last until 1917. In Pittsburgh, Pennsylvania, the Sarah Street line lasted until 1923. The last regular mule-drawn cars in the United States ran in Sulphur Rock, Arkansas, until 1926 and were commemorated by a U.S. Postage Stamp issued in 1983. The last mule tram service in Mexico City ended in 1932, and a mule-powered line in Celaya survived until May 1954.\n\nIn the 21st century, horsecars are still used to take visitors along the 9-kilometre (5.6 mi) tour of the three cenotes from Chunkan\u00e1n near Cuzam\u00e1 Municipality in the state of Yucat\u00e1n. Disneyland theme park in Anaheim, California, has operated a short horsecar line since it opened in July 1955. Similarly, Disney World theme park in Orlando has operated a short horsecar line since it opened in October 1971. At both parks, the horsecars run from 8\u20139 am to 1:30\u20132 pm and, depending on the season, sometimes 5\u20137 pm.\n\nBy 1889, 110 electric railways incorporating Sprague's equipment had been started or were planned on several continents. By 1895, almost 900 electric street railways and nearly 11,000 miles (18,000 km) of track had been built in the United States.\n\nThe rapid growth of streetcar systems led to the widespread ability of people to live outside of a city and commute into it for work on a daily basis. Several of the communities that grew as a result of this new mobility were known as streetcar suburbs. Another outgrowth of the popularity of urban streetcar systems was the rise of interurban lines, which were basically streetcars that operated between cities and served remote, even rural, areas. In some areas, interurban lines competed with regular passenger service on mainline railroads, and in others they simply complemented the mainline roads by serving towns not on the mainlines. The largest of these was the Pacific Electric system in Los Angeles, which had over 1,000 miles (1,600 km) of track and 2,700 scheduled services each day.\n\nThe Hagerstown and Frederick Railway, which started in 1896 in northern Maryland, was built to provide transit service to resorts, and the streetcar company built and operated two amusement parks to entice more people to ride their streetcars. The Lake Shore Electric Railway interurban in northern Ohio carried passengers to Cedar Point and several other Ohio amusement parks. The Lake Compounce amusement park, which started in 1846, had by 1895 established trolley service to its rural Connecticut location.
Although outside trolley service to Lake Compounce stopped in the 1930s, the park resurrected its trolley past with the opening of the \"Lakeside Trolley\" ride in 1997, which still operates today as a short heritage line. In the days before radio listening became widespread, and in towns or neighborhoods too small to support a viable amusement park, streetcar lines might help to fund an appearance by a touring musical act at the local bandstand to boost weekend afternoon ridership.\n\nMany of Mexico's streetcars were fitted with gasoline motors in the 1920s and some were pulled by steam locomotives. Only 15 Mexican streetcar systems were electrified in the 1920s.", "doc_id": "8e779bc0-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/East_Carolina_Pirates_football", "document": "The East Carolina Pirates are a college football team that represents East Carolina University (variously \"East Carolina\" or \"ECU\"). The team is a member of the American Athletic Conference, which is in Division I Football Bowl Subdivision (formerly Division I-A) of the National Collegiate Athletic Association (NCAA). Mike Houston is the head coach.\n\nThe Pirates have won seven conference championships and nine bowl games. The Pirates have had 20 All-Americans over the team's history. Four players have their jerseys retired. Numerous Pirates have played in the NFL, including ten current players.\n\nThe team played its inaugural season in 1932. The team played home games at College Stadium on the main campus from the 1949 to the 1962 season. With the exception of the 1999 Miami football game, they have played their home games at Dowdy\u2013Ficklen Stadium every year since 1963. The stadium is located south of East Carolina's main campus near the intersection of South Charles Boulevard and 14th Street. Dowdy-Ficklen underwent an expansion in 2010, raising the capacity of the stadium to 50,000. The Pirates announced a $55 million renovation project to Dowdy-Ficklen in 2016, which will add a new tower above the south side stands, among other things.\n\nThe coaches and administrative support are located in the Ward Sports Medicine Building, which is adjacent to the stadium. Strength and conditioning for the players occurs in the Murphy Center, a $13 million indoor training facility which was completed in June 2002 and which is located in the west end zone of Dowdy\u2013Ficklen Stadium. The Pirates also practice and train at the Cliff Moore Practice Facility, which was fully renovated in 2005 and which has two full-length NFL-caliber fields.\n\nReplacing Logan as the Pirates' head coach was Florida defensive coordinator John Thompson. Thompson came to ECU with a great resume as an assistant coach and a reputation as a brilliant defensive mind, working under Lou Holtz at Arkansas, Joe Raymond Peace at Louisiana Tech, Curley Hallman and Jeff Bower at Southern Miss, Houston Nutt at Arkansas and Ron Zook at Florida.\n\nCoach Thompson's tenure set the Pirates back several years; his teams accumulated only three wins over two years, with records of 1\u201311 in 2003 and 2\u20139 in 2004. His teams beat only Army both years and Tulane his second year. Amid much fan and administration impatience and frustration with the struggles of the football program, athletics director Terry Holland fired Thompson after the 2004 season.
Thompson left with an abysmal 3\u201320 record.\n\nIn December 2004, Holland brought in former UConn head coach Skip Holtz, son of legendary coach Lou Holtz, to become the Pirates' nineteenth head football coach.\n\nIn his first season, Coach Holtz helped turn the team around, winning five games, two more than John Thompson had accomplished in his entire tenure. His second season brought the Pirates' first winning season since 2000, with seven wins, and East Carolina was bowl-eligible for the first time since the 2001 season. The 2006 team had notable wins over Virginia, Southern Miss, Central Florida and North Carolina State. A loss to Rice in the last conference game of the year kept the Pirates out of the Conference USA Championship Game. Following the team's winning season, the newly created Papajohns.com Bowl invited East Carolina to play in its contest, where the Pirates lost to former C-USA rival South Florida, 24\u20137.\n\nIn 2007, Holtz's Pirates continued their winning ways. The team won eight regular season games, earning their second bowl game in two years. The Pirates played Boise State in the Hawai'i Bowl, defeating the Broncos by a score of 41\u201338. The Hawaii Bowl win marked the Pirates' first bowl victory since the Galleryfurniture.com Bowl win against Texas Tech in 2000.\n\nOn August 30, 2008, the Pirates pulled off a stunning upset against then-17th-ranked Virginia Tech, 27\u201322, on a late blocked punt returned for a touchdown by senior wide receiver T.J. Lee. The following week they pulled off an even stronger upset of then-8th-ranked West Virginia by the score of 24\u20133, not allowing a touchdown for the entire game. This was the Pirates' third straight victory against a top-25 ranked opponent, counting Boise State from the year before. As a result, East Carolina was awarded the number 14 ranking in the Associated Press poll and 20th in the USA Today poll, the highest since January 1992, when the Pirates were ranked ninth. The Pirates finished the 2008 season at 9\u20135, winning the Eastern Division of Conference USA and defeating Tulsa in the Championship game. This was the first Conference Championship for ECU since 1976. ECU was then invited to the AutoZone Liberty Bowl to face Kentucky, where the Pirates controlled the first half but fell to UK 25\u201319. The next season, East Carolina produced a second Conference USA title with a 38\u201332 win over Houston, and finished the season at 9\u20135 after an overtime loss to Arkansas in the Liberty Bowl.\n\nOn January 14, 2010, it was announced that Holtz was leaving his position at East Carolina to take the head football coach position at South Florida, replacing the recently fired Jim Leavitt. Holtz left ECU with a 38\u201327 record.", "doc_id": "8e779ce2-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Music_industry", "document": "The music industry consists of the individuals and organizations that earn money by writing songs and musical compositions, creating and selling recorded music and sheet music, presenting concerts, as well as the organizations that aid, train, represent and supply music creators.
Among the many individuals and organizations that operate in the industry are: the songwriters and composers who write songs and musical compositions; the singers, musicians, conductors, and bandleaders who perform the music; the record labels, music publishers, recording studios, music producers, audio engineers, retail and digital music stores, and performance rights organizations who create and sell recorded music and sheet music; and the booking agents, promoters, music venues, road crew, and audio engineers who help organize and sell concerts.\n\nThe industry also includes a range of professionals who assist singers and musicians with their music careers. These include talent managers, artists and repertoire managers, business managers, entertainment lawyers; those who broadcast audio or video music content (satellite, Internet radio stations, broadcast radio and TV stations); music journalists and music critics; DJs; music educators and teachers; musical instrument manufacturers; as well as many others. In addition to the businesses and artists, there are organizations that also play an important role, including musicians' unions (e.g. American Federation of Musicians), not-for-profit performance-rights organizations (e.g. American Society of Composers, Authors and Publishers) and other associations (e.g. International Alliance for Women in Music, a non-profit organization that advocates for women composers and musicians).\n\nThe modern Western music industry emerged between the 1930s and 1950s, when records replaced sheet music as the most important product in the music business. In the commercial world, \"the recording industry\"\u2014a reference to recording performances of songs and pieces and selling the recordings\u2014began to be used as a loose synonym for \"the music industry\". Since the 2000s, a majority of the music market has been controlled by three major corporate labels: the French-owned Universal Music Group, the Japanese-owned Sony Music Entertainment,[1] and the US-owned Warner Music Group. Labels outside of these three major labels are referred to as independent labels (or \"indies\"). The largest portion of the live music market for concerts and tours is controlled by Live Nation, the largest promoter and music venue owner. Live Nation is a former subsidiary of iHeartMedia Inc, which is the largest owner of radio stations in the United States.\n\nIn the first decades of the 2000s, the music industry underwent drastic changes with the advent of widespread digital distribution of music via the Internet (which includes both illegal file sharing of songs and legal music purchases in online music stores). A conspicuous indicator of these changes is total music sales: since 2000, sales of recorded music have dropped off substantially while live music has increased in importance. By 2011, the largest recorded music retailer in the world was a digital, Internet-based platform operated by a computer company: Apple Inc.'s online iTunes Store. Since 2011, the music industry has seen consistent sales growth, with streaming now generating more revenue per year than digital downloads. Spotify, Apple Music, and Amazon Music are the largest streaming services by subscriber count.\n\nMusic publishing using machine-printed sheet music developed during the Renaissance music era in the mid-15th century. The development of music publication followed the evolution of printing technologies that were first developed for printing regular books.
After the mid-15th century, mechanical techniques for printing sheet music were first developed. The earliest example, a set of liturgical chants, dates from about 1465, shortly after the Gutenberg Bible was printed. Prior to this time, music had to be copied out by hand. To copy music notation by hand was a very costly, labor-intensive, and time-consuming process, so it was usually undertaken only by monks and priests seeking to preserve sacred music for the church. The few collections of secular (non-religious) music that are extant were commissioned and owned by wealthy aristocrats. Examples include the Squarcialupi Codex of Italian Trecento music and the Chantilly Codex of French Ars subtilior music.\n\nThe use of printing enabled sheet music to be reproduced much more quickly and at a much lower cost than hand-copying music notation. This helped musical styles to spread to other cities and countries more quickly, and it also enabled music to be spread to more distant areas. Before the invention of music printing, a composer's music might only be known in the city she lived in and its surrounding towns, because only wealthy aristocrats would be able to afford to have hand copies made of her music. With music printing, though, a composer's music could be printed and sold at a relatively low cost to purchasers from a wide geographic area. As sheet music of major composers' pieces and songs began to be printed and distributed in a wider area, this enabled composers and listeners to hear new styles and forms of music. A German composer could buy songs written by an Italian or English composer, and an Italian composer could buy pieces written by Dutch composers and learn how they wrote music. This led to more blending of musical styles from different countries and regions.\n\nAt the dawn of the 20th century, the development of sound recording began to function as a disruptive technology to the commercial interests which published sheet music. During the sheet music era, if a regular person wanted to hear popular new songs, he or she would buy the sheet music and play it at home on a piano, or learn the song at home while playing the accompaniment part on piano or guitar. Commercially released phonograph records of musical performances, which became available starting in the late 1880s, and later the onset of widespread radio broadcasting, starting in the 1920s, forever changed the way music was heard. Opera houses, concert halls, and clubs continued to produce music, and musicians and singers continued to perform live, but the power of radio allowed bands, ensembles and singers who had previously performed only in one region to become popular on a nationwide and sometimes even a worldwide scale. Moreover, whereas attendance at the top symphony and opera concerts was formerly restricted to high-income people in a pre-radio world, with broadcast radio a much wider range of people, including lower- and middle-income people, could hear the best orchestras, big bands, popular singers and opera shows.\n\nThe \"record industry\" eventually replaced the sheet music publishers as the music industry's largest force. A multitude of record labels came and went. Some noteworthy labels of the earlier decades include Columbia Records, Crystalate, Decca Records, Edison Bell, The Gramophone Company, Invicta, Kalliope, Path\u00e9, the Victor Talking Machine Company and many others.
Many record companies died out as quickly as they had formed, and by the end of the 1980s, the \"Big Six\" \u2014 EMI, CBS, BMG, PolyGram, WEA and MCA \u2014 dominated the industry. Sony bought CBS Records in 1987 and changed its name to Sony Music in 1991. In mid-1998, PolyGram Music Group merged with MCA Music Entertainment, creating what is now known as Universal Music Group. Since then, Sony and BMG merged in 2004, and Universal took over the majority of EMI's recorded music interests in 2012. EMI Music Publishing, also once part of the now defunct British conglomerate, is now co-owned by Sony as a subsidiary of Sony/ATV Music Publishing. As in other industries, the record industry is characterised by frequent mergers and acquisitions, among the major companies as well as mid-sized businesses (recent examples include the Belgian group PIAS and the French group Harmonia Mundi).\n\nGenre-wise, music entrepreneurs expanded their industry models into areas like folk music, in which composition and performance had continued for centuries on an ad hoc self-supporting basis. Forming an independent record label, or \"indie\" label, or signing to such a label continues to be a popular choice for up-and-coming musicians, especially in genres like hardcore punk and extreme metal, even though indies cannot offer the same financial backing as major labels. Some bands prefer to sign with an indie label, because these labels typically give performers more artistic freedom.", "doc_id": "8e779e7c-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Transformer", "document": "A transformer is a passive component that transfers electrical energy from one electrical circuit to another circuit, or multiple circuits. A varying current in any coil of the transformer produces a varying magnetic flux in the transformer's core, which induces a varying electromotive force (EMF) across any other coils wound around the same core. Electrical energy can be transferred between separate coils without a metallic (conductive) connection between the two circuits. Faraday's law of induction, discovered in 1831, describes the induced voltage effect in any coil due to a changing magnetic flux encircled by the coil.\n\nTransformers are used to change AC voltage levels, such transformers being termed step-up or step-down type to increase or decrease voltage level, respectively. Transformers can also be used to provide galvanic isolation between circuits as well as to couple stages of signal-processing circuits. Since the invention of the first constant-potential transformer in 1885, transformers have become essential for the transmission, distribution, and utilization of alternating current electric power.[2] A wide range of transformer designs is encountered in electronic and electric power applications. Transformers range in size from RF transformers less than a cubic centimeter in volume, to units weighing hundreds of tons used to interconnect the power grid.\n\nAn ideal transformer is linear, lossless and perfectly coupled. Perfect coupling implies infinitely high core magnetic permeability and winding inductance and zero net magnetomotive force. A varying current in the transformer's primary winding creates a varying magnetic flux in the transformer core, which is also encircled by the secondary winding. This varying flux at the secondary winding induces a varying electromotive force or voltage in the secondary winding.
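A minimal worked restatement, in LaTeX, of the ideal-transformer relations discussed in this and the following paragraphs; the symbols (N for turns, \Phi for the shared core flux, subscripts p and s for primary and secondary) are notation chosen for this sketch rather than taken from the article:

$$V_\mathrm{p} = -N_\mathrm{p}\frac{d\Phi}{dt}, \qquad V_\mathrm{s} = -N_\mathrm{s}\frac{d\Phi}{dt} \quad\Longrightarrow\quad \frac{V_\mathrm{p}}{V_\mathrm{s}} = \frac{N_\mathrm{p}}{N_\mathrm{s}}, \qquad \frac{I_\mathrm{p}}{I_\mathrm{s}} = \frac{N_\mathrm{s}}{N_\mathrm{p}}, \qquad Z'_\mathrm{load} = \left(\frac{N_\mathrm{p}}{N_\mathrm{s}}\right)^{2} Z_\mathrm{load}$$

Because the same flux \Phi links both windings (the perfect-coupling assumption above), the voltage ratio equals the turns ratio; lossless operation (V_p I_p = V_s I_s) forces the current ratio to be its inverse, and combining the two yields the turns-ratio-squared impedance transformation stated below.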
This electromagnetic induction phenomenon is the basis of transformer action and, in accordance with Lenz's law, the secondary current so produced creates a flux equal and opposite to that produced by the primary winding.\n\nThe windings are wound around a core of infinitely high magnetic permeability so that all of the magnetic flux passes through both the primary and secondary windings. With a voltage source connected to the primary winding and a load connected to the secondary winding, the transformer currents flow in the indicated directions and the core magnetomotive force cancels to zero.\n\nAccording to Faraday's law, since the same magnetic flux passes through both the primary and secondary windings in an ideal transformer, a voltage is induced in each winding proportional to its number of turns. The transformer winding voltage ratio is equal to the winding turns ratio.\n\nAn ideal transformer is a reasonable approximation for a typical commercial transformer, with voltage ratio and winding turns ratio both being inversely proportional to the corresponding current ratio.\n\nThe load impedance referred to the primary circuit is equal to the turns ratio squared times the secondary circuit load impedance.\n\nThe ideal transformer model assumes that all flux generated by the primary winding links all the turns of every winding, including itself. In practice, some flux traverses paths that take it outside the windings. Such flux is termed leakage flux, and results in leakage inductance in series with the mutually coupled transformer windings. Leakage flux results in energy being alternately stored in and discharged from the magnetic fields with each cycle of the power supply. It is not directly a power loss, but results in inferior voltage regulation, causing the secondary voltage not to be directly proportional to the primary voltage, particularly under heavy load. Transformers are therefore normally designed to have very low leakage inductance.\n\nIn some applications increased leakage is desired, and long magnetic paths, air gaps, or magnetic bypass shunts may deliberately be introduced in a transformer design to limit the short-circuit current it will supply. Leaky transformers may be used to supply loads that exhibit negative resistance, such as electric arcs, mercury- and sodium-vapor lamps and neon signs, or for safely handling loads that become periodically short-circuited, such as electric arc welders.\n\nAir gaps are also used to keep a transformer from saturating, especially audio-frequency transformers in circuits that have a DC component flowing in the windings. A saturable reactor exploits saturation of the core to control alternating current.\n\nKnowledge of leakage inductance is also useful when transformers are operated in parallel. It can be shown that if the percent impedance [e] and associated winding leakage reactance-to-resistance (X/R) ratio of two transformers were the same, the transformers would share the load power in proportion to their respective ratings. However, the impedance tolerances of commercial transformers are significant. Also, the impedance and X/R ratio of different capacity transformers tend to vary.", "doc_id": "8e779f58-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/North_China_Craton", "document": "The North China Craton is a continental crustal block with one of Earth's most complete and complex records of igneous, sedimentary and metamorphic processes.
It is located in northeast China, Inner Mongolia, the Yellow Sea, and North Korea. The term craton designates this as a piece of continent that is stable, buoyant and rigid. Basic properties of cratonic crust include great thickness (around 200 km), relatively low temperature compared to other regions, and low density. The North China Craton is an ancient craton, which experienced a long period of stability and fitted the definition of a craton well. However, the North China Craton later experienced destruction of some of its deeper parts (decratonization), which means that this piece of continent is no longer as stable.\n\nThe North China Craton was at first several discrete, separate continental blocks with independent tectonic activities. In the Paleoproterozoic (2.5-1.8 billion years ago), the continents collided and amalgamated and interacted with the supercontinent, creating belts of metamorphic rocks between the formerly separate parts. The exact process of how the craton was formed is still under debate. After the craton was formed, it stayed stable until the middle of the Ordovician period (480 million years ago). The roots of the craton were then destabilised in the Eastern Block and entered a period of instability. The rocks formed in the Archean and Paleoproterozoic eons (4.6\u20131.6 billion years ago) were significantly overprinted during the root destruction. Apart from the records of tectonic activities, the craton also contains important mineral resources, such as iron ores and rare earth elements, and fossil records of evolutionary development.\n\nThe North China Craton covers approximately 1,500,000 km2 in area and its boundaries are defined by several mountain ranges (orogenic belts): the Central Asian Orogenic Belt to the north, the Qilianshan Orogen to the west, the Qinling Dabie Orogen to the south and the Su-Lu Orogen to the east. The intracontinental orogen Yan Shan belt ranges from east to west in the northern part of the craton.\n\nThe North China Craton consists of two blocks, the Western Block and the Eastern Block, separated by the 100\u2013300 km wide Trans North China Orogen, which is also called the Central Orogenic Belt or Jinyu Belt. The Eastern Block covers areas including southern Anshan-Benxi, eastern Hebei, southern Jilin, northern Liaoning, Miyun-Chengdu and western Shandong. Tectonic activities, such as earthquakes, have increased since craton root destruction started in the Phanerozoic. The Eastern Block is defined by high heat flow, thin lithosphere and many earthquakes. It has experienced a number of earthquakes with a magnitude of over 8 on the Richter scale, claiming millions of lives. The thin mantle root, which is the lowest part of the lithosphere, is the reason for its instability. The thinning of the mantle root caused the craton to destabilize, weakening the seismogenic layer, which then allows earthquakes to happen in the crust. The Eastern Block may once have had a thick mantle root, as shown by xenolith evidence, but this seems to have been thinned during the Mesozoic. The Western Block is located in Helanshan-Qianlishan, Daqing-Ulashan, Guyang-Wuchuan, Sheerteng and Jining. It is stable because of its thick mantle root. Little internal deformation has occurred here since the Precambrian.\n\nThe rocks in the North China Craton consist of Precambrian (4.6 billion years ago to 539 million years ago) basement rocks, with the oldest zircon dated 4.1 billion years ago and the oldest rock dated 3.8 billion years ago.
The Precambrian rocks were then overlain by Phanerozoic (539 million years ago to present) sedimentary rocks or igneous rocks. The Phanerozoic rocks are largely not metamorphosed. The Eastern Block is made up of early to late Archean (3.8-3.0 billion years ago) tonalite-trondhjemite-granodiorite gneisses, granitic gneisses, some ultramafic to felsic volcanic rocks and metasediments, with some granitoids formed in tectonic events about 2.5 billion years ago. These are overlain by Paleoproterozoic rocks which were formed in rift basins. The Western Block consists of an Archean (2.6\u20132.5 billion years ago) basement which comprises tonalite-trondhjemite-granodiorite, mafic igneous rock, and metamorphosed sedimentary rocks. The Archean basement is overlain unconformably by Paleoproterozoic khondalite belts, which consist of different types of metamorphic rocks, such as graphite-bearing sillimanite garnet gneiss. Sediments with various properties were widely deposited in the Phanerozoic; for example, carbonate and coal-bearing rocks were formed in the late Carboniferous to early Permian (307-270 million years ago), while purple sand-bearing mudstones were formed in a shallow lake environment in the Early to Middle Triassic. Apart from sedimentation, there were six major stages of magmatism after the Phanerozoic decratonization. In the Jurassic to Cretaceous (100-65 million years ago), sedimentary rocks were often mixed with volcanic rocks due to volcanic activity.\n\nThe North China Craton is very important in terms of understanding biostratigraphy and evolution. In Cambrian and Ordovician time, the units of limestone and carbonate kept a good record of biostratigraphy, and they are therefore important for studying evolution and mass extinction. The North China platform was formed in the early Palaeozoic. It was relatively stable during the Cambrian, and the limestone units were therefore deposited with relatively few interruptions, in an underwater environment. The platform was bounded by faults and belts, for example the Tanlu fault. The Cambrian and Ordovician carbonate sedimentary units can be defined by six formations: Liguan, Zhushadong, Mantou, Zhangxia, Gushan, and Chaomidian. Different trilobite samples can be retrieved from different strata, forming biozones, for example the Blackwelderia tenuilimbata (a type of trilobite) zone in the Gushan formation. The trilobite biozones can be used to correlate and identify events in different places, such as identifying unconformity sequences from missing biozones or correlating events in a neighbouring block (like the Tarim block).\n\nThe carbonate sequence can also be of evolutionary significance because it indicates extinction events like the biomeres in the Cambrian. Biomeres are small extinction events defined by the migration of a group of trilobites, the family Olenidae, which lived in a deep-sea environment. Olenidae trilobites migrated to shallow sea regions while the other trilobite groups and families died out in certain time periods. This is speculated to be due to a change in ocean conditions, either a drop in ocean temperature or a drop in oxygen concentration, which affected the circulation and living environment of marine species. The shallow marine environment would change dramatically, resembling a deep sea environment. The deep sea species would thrive, while the other species died out. The trilobite fossils thus record important natural selection processes.
The carbonate sequence containing the trilobite fossils is hence an important record of paleoenvironment and evolution.", "doc_id": "8e77a098-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Chennai_Metro", "document": "The Chennai Metro is a rapid transit system serving the city of Chennai, Tamil Nadu, India. It is the fourth-longest metro system in India. The system commenced service in 2015 after the partial opening of the first phase of the project. The network consists of two colour-coded lines covering a length of 54.65 kilometres (33.96 mi). Chennai Metro Rail Limited (CMRL), a joint venture between the Government of India and the Government of Tamil Nadu, built and operates the Chennai Metro. The system has a mix of underground and elevated stations and uses standard gauge. The services operate daily between 4:30 and 23:00 with a varying frequency of 5 to 14 minutes.\n\nChennai had an established Chennai Suburban Railway network that spanned from Beach to Tambaram, which dates back to 1931 and operated on a metre-gauge line. This service now continues after conversion to a broad-gauge line up to Chengalpattu. The suburban network also consists of two more suburban lines, the westbound Dr. M.G.R. Central\u2013Arakkonam suburban service and the northbound Dr. M.G.R. Central\u2013Gummidipoondi. The first phase of the Chennai Mass Rapid Transit System, India's first elevated line, between Chennai Beach and Thirumayilai, opened in 1995 with an extension to Velachery in 2007. Modeled after the Delhi Metro, a similar modern metro rail system was planned for Chennai by Delhi Metro chief E Sreedharan due to his special interest in the city.\n\nThe 25 km long Chennai Mass Rapid Transit System is likely to be handed over to CMRL by the Southern Railway. The entire Mount\u2013Velachery\u2013Beach system will be upgraded to a broad-gauge metro with all the facilities of metro stations, including tracks, security, the ticketing system and the rolling stock. On 11 May 2022, Southern Railway granted in-principle approval for the Chennai Metro to take over the MRTS.\n\nThe cost for the second phase was estimated at \u20b963,000 crore (US$7.9 billion), with the project funded by the government and the lending agencies. JICA has sanctioned concessional loan amounts of \u20b98,877 crore (US$1.1 billion) for the project. Phase 2 is to be funded partially by JICA, AIIB, ADB and NDB. Further, the Blue Line extension from Airport to Kilambakkam is estimated at \u20b94,080 crore (US$510 million).\n\nChennai Metro runs on standard gauge measuring 1,435 millimetres (56.5 in) and the lines are double-tracked. The rail tracks were manufactured in Brazil and the raw material was supplied by Tata Steel. The average speed of operation is 35 kilometres per hour (22 mph) and the maximum speed is 80 kilometres per hour (50 mph). Chennai Metro operates trains from 4:30 AM to 11:00 PM with a frequency of one train every 4.5 minutes in peak hours and every 15 minutes in lean hours. CMRL plans to increase the frequency to one train every 2.5 minutes once footfalls reach 600,000 passengers a day.", "doc_id": "8e77a16a-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Political_career_of_John_C._Breckinridge", "document": "The political career of John C. Breckinridge included service in the state government of Kentucky, the federal government of the United States, and the government of the Confederate States of America.
In 1857, at the age of 36, he was inaugurated as Vice President of the United States under James Buchanan. He remains the youngest person to ever hold the office. Four years later, he ran as the presidential candidate of a dissident group of Southern Democrats, but lost the election to the Republican candidate Abraham Lincoln.\n\nA member of the Breckinridge political family, John C. Breckinridge became the first Democrat to represent Fayette County in the Kentucky House of Representatives, and in 1851, he was the first Democrat to represent Kentucky's 8th congressional district in over 20 years. A champion of strict constructionism, states' rights, and popular sovereignty, he supported Stephen A. Douglas's Kansas\u2013Nebraska Act as a means of addressing slavery in the territories acquired by the U.S. in the Mexican\u2013American War. Considering his re-election to the House of Representatives unlikely in 1854, he returned to private life and his legal practice. He was nominated for vice president at the 1856 Democratic National Convention, and although he and Buchanan won the election, he enjoyed little influence in Buchanan's administration.\n\nIn 1859, the Kentucky General Assembly elected Breckinridge to a U.S. Senate term that would begin in 1861. In the 1860 United States presidential election, Breckinridge captured the electoral votes of most of the Southern states, but finished a distant second among four candidates. Lincoln's election as President prompted the secession of the Southern states to form the Confederate States of America. Though Breckinridge sympathized with the Southern cause, in the Senate he worked futilely to reunite the states peacefully. After the Confederates fired on Fort Sumter, beginning the Civil War, he opposed allocating resources for Lincoln to fight the Confederacy. Fearing arrest after Kentucky sided with the Union, he fled to the Confederacy, joined the Confederate States Army, and was subsequently expelled from the Senate. He served in the Confederate Army from October 1861 to February 1865, when Confederate President Jefferson Davis appointed him Confederate States Secretary of War. Then, concluding that the Confederate cause was hopeless, he encouraged Davis to negotiate a national surrender. Davis's capture on May 10, 1865, effectively ended the war, and Breckinridge fled to Cuba, then Great Britain, and finally Canada, remaining in exile until President Andrew Johnson's offer of amnesty in 1868. Returning to Kentucky, he refused all requests to resume his political career and died of complications related to war injuries in 1875.\n\nSlavery issues dominated Breckinridge's political career, although historians disagree about Breckinridge's views. In Breckinridge: Statesman, Soldier, Symbol, William C. Davis argues that, by adulthood, Breckinridge regarded slavery as evil; his entry in the 2002 Encyclopedia of World Biography records that he advocated voluntary emancipation. In Proud Kentuckian: John C. Breckinridge 1821\u20131875, Frank Heck disagrees, citing Breckinridge's consistent advocacy for slavery protections, beginning with his opposition to emancipationist candidates\u2014including his uncle, Robert Jefferson Breckinridge\u2014in the state elections of 1849.\n\nBreckinridge's grandfather, John, owned slaves, believing slavery to be a necessary evil in an agrarian economy. He hoped for gradual emancipation but did not believe the federal government was empowered to effect it; Davis wrote that this became \"family doctrine\". As a U.S.
Senator, John Breckinridge insisted that decisions about slavery in Louisiana Territory be left to its future inhabitants, essentially the \"popular sovereignty\" advocated by John C. Breckinridge prior to the Civil War. John C. Breckinridge's father, Cabell, embraced gradual emancipation and opposed government interference with slavery, but Cabell's brother Robert, a Presbyterian minister, became an abolitionist, concluding that slavery was morally wrong. Davis recorded that all the Breckinridges were pleased when the General Assembly upheld the ban on importing slaves to Kentucky in 1833.\n\nJohn C. Breckinridge encountered conflicting influences as an undergraduate at Centre College and in law school at Transylvania University. Centre President John C. Young, Breckinridge's brother-in-law, believed in states' rights and gradual emancipation, as did George Robertson, one of Breckinridge's instructors at Transylvania, but James G. Birney, father of Breckinridge's friend and Centre classmate William Birney, was an abolitionist. In an 1841 letter to Robert Breckinridge, who became his surrogate father after Cabell Breckinridge's death, John C. Breckinridge wrote that only \"ignorant, foolish men\" feared abolition. In an Independence Day address in Frankfort later that year, he decried the \"unlawful dominion over the bodies ... of men\". An acquaintance believed that Breckinridge's move to Iowa Territory was motivated, in part, by the fact that it was a free territory under the Missouri Compromise.\n\nAfter returning to Kentucky, Breckinridge became friends with abolitionists Cassius Marcellus Clay, Garrett Davis, and Orville H. Browning. He represented freedmen in court and loaned them money. He was a Freemason and member of the First Presbyterian Church, both of which opposed slavery. Nevertheless, because blacks were educationally and socially disadvantaged in the South, Breckinridge concluded that \"the interests of both races in the Commonwealth would be promoted by the continuance of their present relations\". He supported the new state constitution adopted in 1850, which forbade the immigration of freedmen to Kentucky and required emancipated slaves to be expelled from the state. Believing it was best to relocate freedmen to the African colony of Liberia, he supported the Kentucky branch of the American Colonization Society. The 1850 Census showed that Breckinridge owned five slaves, aged 11 to 36. Heck recorded that his slaves were well-treated but noted that this was not unusual and proved nothing about his views on slavery.\n\nBecause Breckinridge defended both the Union and slavery in the General Assembly, he was considered a moderate early in his political career. In June 1864, Pennsylvania's John W. Forney opined that Breckinridge had been \"in no sense an extremist\" when elected to Congress in 1851. Of his early encounters with Breckinridge, Forney wrote: \"If he had a conscientious feeling, it was hatred of slavery, and both of us, 'Democrats' as we were, frequently confessed that it was a sinful and an anti-Democratic institution, and that the day would come when it must be peaceably or forcibly removed.\" Heck discounts this statement, pointing out that Forney was editor of a pro-Union newspaper and Breckinridge a Confederate general at the time it was published. 
As late as the 1856 presidential election, some alleged that Breckinridge was an abolitionist.\n\nBy the time he began his political career, Breckinridge had concluded that slavery was more a constitutional issue than a moral one. Slaves were property, and the Constitution did not empower the federal government to interfere with property rights. From Breckinridge's constructionist viewpoint, allowing Congress to legislate emancipation without constitutional sanction would lead to \"unlimited dominion over the territories, excluding the people of the slave states from emigrating thither with their property\". As a private citizen, he supported the slavery protections in the Kentucky Constitution of 1850 and denounced the Wilmot Proviso, which would have forbidden slavery in territory acquired in the Mexican\u2013American War. As a state legislator, he declared slavery a \"wholly local and domestic\" matter, to be decided separately by the residents of each state and territory. Because Washington, D.C., was a federal entity and the federal government could not interfere with property rights, he concluded that forced emancipation there was unconstitutional. As a congressman, he insisted on Congress's \"perfect non-intervention\" with slavery in the territories. Debating the 1854 Kansas\u2013Nebraska Act, he explained, \"The right to establish [slavery in a territory by government sanction] involves the correlative right to prohibit; and, denying both, I would vote for neither.\"", "doc_id": "8e77a304-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Meat_%26_Livestock_Australia", "document": "Meat & Livestock Australia (M&LA) is an independent company which regulates standards for meat and livestock management in Australian and international markets. Headquartered in North Sydney, Australia, M&LA works closely with the Australian government and the meat and livestock industries. As a multi-faceted authority, M&LA has numerous roles across the financial, public and research sectors. The M&LA corporate group conducts research and offers marketing services to meat producers, government bodies and market analysts alike. Forums and events run by M&LA also aim to provide producers with the opportunity to engage with other participants in the supply chain.\n\nThe M&LA corporate group is led by Meat and Livestock Limited (M&LA Ltd.), which is the parent company of two subsidiaries that have diverse roles in the meat and livestock industry. The Integrity System Company (ISC) and the MLA Donor Company (MDC) are wholly owned subsidiaries of M&LA. Numerous studies into Australia's livestock production and marketing are funded or operated by M&LA. The corporate group also participates in environmental initiatives alongside government authorities and other research bodies, which aim to address the contribution of the livestock industry to climate change in Australia.\n\nIn its research and data analysis capacity, M&LA generates the Eastern Young Cattle Indicator (EYCI) and supports the implementation of Meat Standards Australia (MSA) in the Australian meat industry. M&LA also conducts educational programs regarding the production and consumption of red meat. Statutory obligations imposed upon M&LA by the Australian government require the corporate group to undergo regular independent reviews of its performance and efficiency.
Marketing campaigns are produced by M&LA to promote red meat consumption; however, many advertisements have been subject to criticism over allegations of cultural appropriation and discrimination.\n\nThe Coronavirus (COVID-19) outbreak has disrupted economic markets and production globally. The COVID-19 pandemic has also impacted the revenue of the M&LA corporate group. In the 2019\u201320 financial year, M&LA produced an overall revenue of A$269.7 million. M&LA experienced a 0.1% drop in revenue compared to the 2018\u201319 financial year, in which the corporate group accumulated a total revenue of A$269.9 million.\n\nThe COVID-19 outbreak has disrupted many of the initiatives managed by M&LA, which have consequently been cancelled or indefinitely postponed. M&LA has introduced online educational programs and social forums, which aim to educate the Australian population. Research and data analysis conducted by M&LA have also been interrupted by the pandemic; however, analysis of global markets has been undertaken by M&LA to provide information on conditions in the meat and livestock industries. M&LA aims to directly support producers through the COVID-19 pandemic by providing accessible resources regarding COVID-19 restrictions and mental health support. In 2020, the annual \"Red Meat\" event hosted by M&LA was cancelled due to COVID-19. Jason Strong, the managing director of M&LA, commented on the cancellation: \"large events such as Red Meat are just not feasible in the current environment and so the only sensible course of action was to cancel for 2020.\"\n\nDespite this cancellation, the M&LA Annual General Meeting (AGM) for 2020 is projected to be held online.\n\nThe COVID-19 outbreak has made collecting data and statistics from local and global markets more difficult. Due to COVID-19 restrictions and reduced access to data, M&LA temporarily ceased production of the EYCI between March and June 2020. Following the easing of COVID-19 restrictions throughout Australia, the market indicator returned in the last week of June 2020. M&LA aims to continue providing producers and market analysts with data on international market conditions by implementing studies in foreign markets, which are of interest to Australian producers in terms of meat export volumes. In 2018\u201319, Australia exported 72% of all beef and veal production, and China accounted for 24% of Australia's beef exports in 2019. M&LA conducted consumer research in China to analyse consumer behaviour and growth trends during the COVID-19 pandemic. The findings from the study indicated a continued increase in consumer demand for red meat in China, with demand for Australian beef increasing by 43% in Chinese markets during the pandemic. These studies seek to improve producer confidence and provide data for market analysts during the COVID-19 pandemic.", "doc_id": "8e77a408-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Archaic_Greece", "document": "Archaic Greece was the period in Greek history lasting from circa 800 BC to the second Persian invasion of Greece in 480 BC,[1] following the Greek Dark Ages and succeeded by the Classical period.
In the archaic period, Greeks settled across the Mediterranean and the Black Seas, as far as Marseille in the west and Trapezus (Trebizond) in the east; and by the end of the archaic period, they were part of a trade network that spanned the entire Mediterranean.\n\nThe archaic period began with a massive increase in the Greek population and with significant changes that rendered the Greek world at the end of the 8th century entirely unrecognisable compared with its beginning. According to Anthony Snodgrass, the archaic period was bounded by two revolutions in the Greek world. It began with a \"structural revolution\" that \"drew the political map of the Greek world\" and established the poleis, the distinctively Greek city-states, and it ended with the intellectual revolution of the Classical period.\n\nThe archaic period saw developments in Greek politics, economics, international relations, warfare and culture. It laid the groundwork for the Classical period, both politically and culturally. It was in the archaic period that the Greek alphabet developed, the earliest surviving Greek literature was composed, monumental sculpture and red-figure pottery began in Greece and the hoplite became the core of Greek armies.\n\nIn Athens, the earliest institutions of democracy were implemented under Solon, and the reforms of Cleisthenes at the end of the archaic period brought in Athenian democracy as it was during the Classical period. In Sparta, many of the institutions credited to the reforms of Lycurgus were introduced during the archaic period, the region of Messenia was brought under Spartan control, helotage was introduced and the Peloponnesian League was founded, making Sparta a dominant power in Greece.\n\nThe word archaic derives from the Greek word archaios, meaning 'old', and refers to the period in ancient Greek history before the classical period. The archaic period is generally considered to have lasted from the beginning of the 8th century BC until the beginning of the 5th century BC, with the foundation of the Olympic Games in 776 BC and the Second Persian invasion of Greece in 480 BC forming notional starting and ending dates. The archaic period was long considered to have been less important and historically interesting than the classical period and was studied primarily as a precursor to it. More recently, archaic Greece has come to be studied for its own achievements. With this reassessment of the significance of the archaic period, some scholars have objected to the term archaic because of its connotations in English of being primitive and outdated. No term which has been suggested to replace it has gained widespread currency, however, and the term is still in use.\n\nMuch evidence about the Classical period of ancient Greece comes from written histories, such as Thucydides's History of the Peloponnesian War. By contrast, no such evidence survives from the archaic period. Surviving contemporary written accounts of life in the period are in the form of poetry. Other written sources from the archaic period include epigraphical evidence, including parts of law codes, inscriptions on votive offerings and epigrams inscribed on tombs. However, none of this evidence survives in the quantity that it does from the classical period.[8] What is lacking in written evidence is made up for in the rich archaeological evidence from the archaic Greek world.
Indeed, although much knowledge of Classical Greek art comes from later Roman copies, all surviving archaic Greek art is original.\n\nOther sources for the archaic period are the traditions recorded by later Greek writers such as Herodotus. However, those traditions are not part of any form of history that would be recognised today. Those transmitted by Herodotus were recorded whether or not he believed them to be accurate. Indeed, Herodotus did not even record any dates before 480 BC.\n\nThe Greek population doubled during the eighth century, resulting in more and larger settlements than previously. The largest settlements, such as Athens and Knossos, might have had populations of 1,500 in 1000 BC; by 700 they might have held as many as 5,000 people. This was part of a wider phenomenon of population growth across the Mediterranean region at this time, which may have been caused by a climatic shift between 850 and 750 BC that made the region cooler and wetter. This led to the expansion of population into uncultivated areas of Greece and was probably also a driver for colonisation abroad.\n\nAncient sources give us little information on mortality rates in archaic Greece, but it is likely that not many more than half of the population survived to the age of 18: perinatal and infant mortality are likely to have been very high. The population of archaic Greece would have consequently been very young \u2013 somewhere between two-fifths and two-thirds of the population might have been under 18. By contrast, probably less than one in four people were over 40, and only one in 20 over the age of 60.\n\nEvidence from human remains shows that the average age at death increased over the archaic period, but there is no clear trend for other measures of health. The size of houses gives some evidence for prosperity within society; in the eighth and seventh centuries, the average house size remained constant at around 45\u201350 m2, but the number of very large and very small houses increased, indicating increasing economic inequality. From the end of the seventh century, this trend reversed, with houses clustering closely around a growing average, and by the end of the archaic period the average house size had risen to about 125 m2.", "doc_id": "8e77a520-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Sale_el_Sol", "document": "Sale el Sol (English: The Sun Comes Out) is the ninth studio album by Colombian singer and songwriter Shakira, released on 19 October 2010, by Epic Records. The album marks a return to Shakira's signature Latin pop sound after the electropop record She Wolf (2009). The singer split the album into three musical \"directions\": a romantic side, a \"rock and roll\" side, and a \"Latino, tropical\" side. The latter two \"directions\" experiment with rock and merengue music, respectively. As co-producer, Shakira enlisted collaborators including Josh Abraham, El Cata, Gustavo Cerati, John Hill, Lester Mendez, and Residente from Calle 13.\n\nFive singles were released from Sale el Sol. The lead single \"Loca\" peaked atop the record charts of Italy, Spain, and Switzerland and the Billboard Hot Latin Songs chart in the United States. The third single, \"Rabiosa,\" reached top ten positions in Austria, Belgium, Italy and Spain. The other singles achieved moderate chart success in Hispanic regions.
Shakira embarked on The Sun Comes Out World Tour in late 2010 to promote the album.\n\nAt the 2011 Latin Grammy Awards ceremony, Sale el Sol won the award for Best Female Pop Vocal Album and was also nominated for Album of the Year. A success throughout Europe and Latin America, the album reached number one on the charts in Belgium, Croatia, France, Mexico, Portugal and Spain. In the United States, it debuted at number seven on the Billboard 200 chart and at number one on both the Top Latin Albums and Latin Pop Albums charts. Sale el Sol attained numerous record certifications in several regions across the globe, including multi-platinum certifications in Italy, Mexico, Spain, Switzerland and Poland, and diamond certifications in Brazil, France, Colombia and the United States (Latin).\n\nSale el Sol is considered to be Shakira's return to her \"roots\" and is a \"fusion between rock and pop heavily influenced from Latino and Colombian music\". Shakira said there are three \"directions\" of Sale el Sol: a romantic one, a \"very rock and roll\" one, and a \"Latino, tropical\" one. Explaining the romantic \"direction\" of the album, she said that it was something \"which I hadn't tapped into for the past three years, but it suddenly came to me and I couldn't hold it back. So it\u2019s [the album has] got songs that are very intense, very romantic [sic]\". Examples include ballads like \"Antes de las Seis\" (\"Before Six O'Clock\") and \"Lo Que Mas\" (\"The Most\"); in the former Shakira delivers \"sad, emotional, and heartfelt vocals,\"[15] while in the latter she sings over a piano and string-supplemented melody. About the rock and roll \"direction\" of the album, Shakira said \"I started my career as a rock artist and then I kind of crossed over into pop, so it\u2019s been fun to re-encounter that side of my artistic personality\".\n\nThe title track is an acoustic guitar-driven alternative rock and Latin pop-infused song, while \"Devoci\u00f3n\" (\"Devotion\") is a techno-influenced alternative rock track in which Shakira \"beats all U2-inspired arena rockers at their own game,\" according to AllMusic critic Stephen Thomas Erlewine. The \"sultry, energetic, bass-laden\" \"Tu Boca\" (\"Your Mouth\") finds influences from new wave music. \"Islands\" is a cover of the original song of the same name by English indie pop band The xx. Shakira adds a few house music elements to the original art pop song.\n\nThe \"Latino\" and tropical side of the album is prominently influenced by merengue music. The genre is characterized by the use of the accordion and the percussion instrument tambora. \"Loca\" (\"Crazy\") is Shakira's interpretation of El Cata's song \"Loca Con Su Tiguere\", and is composed of horn-heavy merengue beats set over techno dance percussion beats. Similarly, \"Rabiosa\" (\"Rabid\") is Shakira's interpretation of El Cata's song \"La Rabiosa\", and is a fast-paced merengue-influenced dance track. In addition to merengue, songs like \"Addicted to You\", which features \"bilingual lyrics, a very 70's chorus and Copacabana sounds\", are influenced by reggaeton music. \"Gordita\" (\"Chubby\"), a duet between Residente of Calle 13 and Shakira, is a cumbia and Latin rap hybrid.\n\nTalking about the album's lyrical content, Shakira said that there are some songs \"that are just to dance to in a club, that don\u2019t have a big transcendence\". In \"Rabiosa\", Shakira sings about her partner's sex appeal.
\"Loca\" expresses Shakira's erratic and obsessive behaviour towards her lover, more so than his other leading lady. However, Shakira also said that there are some songs which \"will remain in people\u2019s hearts and people\u2019s consciousness, sometimes forever\". She described these tracks as \"songs that have the power to feed people\u2019s relationships and states of mind and states of spirit\". According to Billboard, the title track is composed of \"evocative and hopeful\" lyrics which are dedicated to Argentine singer-songwriter and Shakira's friend Gustavo Cerati, who had been in a coma around the time of the release of the album. \"Antes de las Seis\" deals with issues of longing, regrets and loneliness. Shakira said these songs are written \"in such a personal and intimate way that at that moment. I'm not really thinking much. I'm just letting it all out\".", "doc_id": "8e77a6ec-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Programme_for_International_Student_Assessment_(2000_to_2012)", "document": "The Programme for International Student Assessment has had several runs before the most recent one in 2012. The first PISA assessment was carried out in 2000. The results of each period of assessment take about one year and a half to be analysed. First results were published in November 2001. The release of raw data and the publication of technical report and data handbook only took place in spring 2002. The triennial repeats follow a similar schedule; the process of seeing through a single PISA cycle, start-to-finish, always takes over four years. 470,000 15-year-old students representing 65 nations and territories participated in PISA 2009. An additional 50,000 students representing nine nations were tested in 2010.\n\nEvery period of assessment focuses on one of the three competence fields of reading, math, science; but the two others are tested as well. After nine years, a full cycle is completed: after 2000, reading was again the main domain in 2009.\n\nThe results for PISA 2003 were released on 14 December 2004. This PISA cycle tested 275,000 15 year-olds on mathematics, science, reading and problem solving and involved schools from 30 OECD member countries and 11 partner countries. Note that for Science and Reading, the means displayed are for \"All Students\", but for these two subjects (domains), not all of the students answered questions in these domains. In the 2003 OECD Technical Report (pages 208, 209), there are different country means (different than those displayed below) available for students who had exposure to these domains.\n\nThe results for the first cycle of the PISA survey were released on 14 November 2001. 265,000 15 year-olds were tested in 28 OECD countries and 4 partner countries on mathematics, science and reading. An additional 11 countries were tested later in 2002.\n\nThe correlation between PISA 2003 and TIMSS 2003 grade 8 country means is 0.84 in mathematics, 0.95 in science. The values go down to 0.66 and 0.79 if the two worst performing developing countries are excluded. Correlations between different scales and studies are around 0.80. The high correlations between different scales and studies indicate common causes of country differences (e.g. educational quality, culture, wealth or genes) or a homogenous underlying factor of cognitive competence. European Economic Area countries perform slightly better in PISA; the Commonwealth of Independent States and Asian countries in TIMSS. 
Content balance and years of schooling explain most of the variation.\n\nEducation professor Yong Zhao has noted that PISA 2009 did not receive much attention in the Chinese media, and that the high scores in China are due to excessive workload and testing, adding that it's \"no news that the Chinese education system is excellent in preparing outstanding test takers, just like other education systems within the Confucian cultural circle: Singapore, Korea, Japan, and Hong Kong.\"\n\nStudents from Shanghai, China, had the top scores of every category (Mathematics, Reading and Science) in PISA 2009. In discussing these results, PISA spokesman Andreas Schleicher, Deputy Director for Education and head of the analysis division at the OECD\u2019s directorate for education, described Shanghai as a pioneer of educational reform in which \"there has been a sea change in pedagogy\". Schleicher stated that Shanghai abandoned its \"focus on educating a small elite, and instead worked to construct a more inclusive system. They also significantly increased teacher pay and training, reducing the emphasis on rote learning and focusing classroom activities on problem solving.\"\n\nUniversity of Copenhagen Professor Svend Kreiner, who examined in detail PISA's 2006 reading results, noted that in 2006 only about ten percent of the students who took part in PISA were tested on all 28 reading questions. \"This in itself is ridiculous,\" Kreiner told Stewart. \"Most people don't know that half of the students taking part in PISA (2006) do not respond to any reading item at all. Despite that, PISA assigns reading scores to these children.\"\n\nThe stable, high marks of Finnish students have attracted a lot of attention. According to Hannu Simola, the results reflect a paradoxical mix of progressive policies implemented through a rather conservative pedagogic setting, where the high levels of teachers' academic preparation, social status, professionalism and motivation for the job are concomitant with the adherence to traditional roles and methods by both teachers and pupils in Finland's changing, but still quite paternalistic culture. Others advance Finland's low poverty rate as a reason for its success. Finnish education reformer Pasi Sahlberg attributes Finland's high educational achievements to its emphasis on social and educational equality and stress on cooperation and collaboration, as opposed to the competition among teachers and schools that prevails in other nations.\n\nOf the 74 countries and regions tested in the PISA 2009 cycle, including the \"+\" nations, the two Indian states ranked 72nd and 73rd out of 74 in both reading and mathematics, and 73rd and 74th in science. India's poor performance may not be explained by language, as some have suggested. 12.87% of US students, for example, indicated that the language of the test differed from the language spoken at home, while 30.77% of Himachal Pradesh students did so, a significantly higher percentage. However, unlike American students, those Indian students with a different language at home did better on the PISA test than those with the same language. India's poor performance on the PISA test is consistent with its poor performance in the only other instance when India's government allowed an international organization to test its students, and with India's own testing of its elite students in a study titled Student Learning in the Metros 2006.
These studies were conducted using TIMSS questions. The poor result in PISA was greeted with dismay in the Indian media. The BBC reported that as of 2008, only 15% of India's students reached high school.", "doc_id": "8e77a822-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Savage_(Megan_Thee_Stallion_song)", "document": "\"Savage\" is a song by American rapper Megan Thee Stallion. It was released on March 6, 2020, as part of her EP Suga and later sent to US Top 40 radio formats on April 7, 2020, by 1501 Certified Entertainment and 300 Entertainment as the third single from the EP. The song was written by the artist with Bobby Sessions, Akeasha Boodie, and producer J. White Did It. It went viral on the video-sharing app TikTok, with people performing the \"Savage\" dance challenge during the song's chorus.\n\nA remix featuring Beyonc\u00e9 was surprise-released on April 29, 2020, and included on Megan's debut album Good News. \"Savage Remix\" was met with widespread critical acclaim, with praise for Megan and Beyonc\u00e9's chemistry and varied delivery styles, as well as for fully transforming the song with new verses. The song reached number one on the US Billboard Hot 100 on May 26, 2020, becoming Megan Thee Stallion's first and Beyonc\u00e9's seventh number-one single on the chart. As of May 2021, the song is certified quadruple platinum in the US. The remix ranked as critics' second-best song of 2020, with critics and publications such as Lindsey Zoladz of The New York Times, Slate, and The Ringer placing the song at number one on their year-end lists. The remix received two awards at the 63rd Annual Grammy Awards, for Best Rap Performance and Best Rap Song. It was also nominated for Record of the Year.\n\nAccording to engineer Eddie \"eMIX\" Hernandez, Megan did the song \"on the spot\", in under an hour. Hernandez explained the recording process: \"The collaboration was going on at the same time. We were building as the song was forming. While he was laying down the snares and the kicks, she was writing to the skeleton of the beat. Once he had the production all ready and sent it over to me, she was ready to go. She had all her writing done. Her recording? She knocks them things out.\" White said that \"it didn't take me more than 10, maybe 15 minutes tops\" to produce the record. White described \"Savage\" as a \"godsend\", adding: \"That song came out of the air man, it came out of the air from God... It was a gift. And straight away, I told her, 'This is going to be a number one record, watch.' When you know, you know.\"\n\nCandace McDuffie of Consequence of Sound noted that, in the song, Megan \"paints herself as 'the hood Mona Lisa' while celebrating her complexity.\" Megan employs huge bravado on the song, which, according to HipHopDX's Aaron McKrell, works to her advantage, as she \"surgically pummels a formidable J. White Did It beat into submission, and still makes time for cool quips like 'I need a mop to clean the floor, it's too much drip, ooh'\". Complex's Jessica McKinney said the beat is \"reminiscent of nostalgic hip-hop music videos set on a Miami beach, and its chorus is expressive, which is perfect for dancing.\"\n\nConsequence of Sound named \"Savage\" one of the essential tracks off Suga. Complex's Jessica McKinney also named it a \"stand-out track\" from the EP. Vulture commented that the song was \"joyfully conceited\", and that previous single \"'B.I.T.C.H.' is a little lightweight as a first single when there's heat like 'Savage' on deck\".
Rob Sheffield of Rolling Stone wrote that Megan is \"at her absolute peak\" and \"on top\". Following its release, The Fader's Salvatore Maicki named \"Savage\" one of the \"10 songs you need in your life this week\", saying Megan checks all of the boxes [classy, bougie, ratchet, sassy, moody, AND nasty] and \"sounds fly as fuck while doing it\". Vice's Kristin Corry listed it as one of the best songs of March 2020, asserting that \"with a hook that acknowledges all parts of her [Megan's] identity, just like each of her EPs introduces a new persona, it's no wonder the world fell in love with it\".", "doc_id": "8e77a91c-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Nebular_hypothesis", "document": "The nebular hypothesis is the most widely accepted model in the field of cosmogony to explain the formation and evolution of the Solar System (as well as other planetary systems). It suggests the Solar System formed from gas and dust orbiting the Sun. The theory was developed by Immanuel Kant and published in his Universal Natural History and Theory of the Heavens (1755) and then modified in 1796 by Pierre Laplace. Originally applied to the Solar System, the process of planetary system formation is now thought to be at work throughout the universe. The widely accepted modern variant of the nebular theory is the solar nebular disk model (SNDM) or solar nebular model. It offered explanations for a variety of properties of the Solar System, including the nearly circular and coplanar orbits of the planets, and their motion in the same direction as the Sun's rotation. Some elements of the original nebular theory are echoed in modern theories of planetary formation, but most elements have been superseded.\n\nAccording to the nebular theory, stars form in massive and dense clouds of molecular hydrogen\u2014giant molecular clouds (GMC). These clouds are gravitationally unstable, and matter coalesces within them to smaller denser clumps, which then rotate, collapse, and form stars. Star formation is a complex process, which always produces a gaseous protoplanetary disk (proplyd) around the young star. This may give birth to planets in certain circumstances, which are not well understood. Thus the formation of planetary systems is thought to be a natural result of star formation. A Sun-like star usually takes approximately 1 million years to form, with the protoplanetary disk evolving into a planetary system over the next 10\u2013100 million years.\n\nThe protoplanetary disk is an accretion disk that feeds the central star. Initially very hot, the disk later cools in what is known as the T Tauri star stage; here, formation of small dust grains made of rocks and ice is possible. The grains may eventually coagulate into kilometer-sized planetesimals. If the disk is massive enough, runaway accretion begins, resulting in the rapid\u2014100,000 to 300,000 years\u2014formation of Moon- to Mars-sized planetary embryos. Near the star, the planetary embryos go through a stage of violent mergers, producing a few terrestrial planets. The last stage takes approximately 100 million to a billion years.\n\nThe formation of giant planets is a more complicated process. It is thought to occur beyond the frost line, where planetary embryos mainly are made of various types of ice. As a result, they are several times more massive than in the inner part of the protoplanetary disk. What follows after the embryo formation is not completely clear.
Some embryos appear to continue to grow and eventually reach 5\u201310 Earth masses\u2014the threshold value necessary to begin accretion of the hydrogen\u2013helium gas from the disk. The accumulation of gas by the core is initially a slow process, which continues for several million years, but after the forming protoplanet reaches about 30 Earth masses (M\u2295) it accelerates and proceeds in a runaway manner. Jupiter- and Saturn-like planets are thought to accumulate the bulk of their mass over only 10,000 years. The accretion stops when the gas is exhausted. The formed planets can migrate over long distances during or after their formation. Ice giants such as Uranus and Neptune are thought to be failed cores, which formed too late, when the disk had almost disappeared.\n\nThere is evidence that Emanuel Swedenborg first proposed parts of the nebular theory in 1734. Immanuel Kant, familiar with Swedenborg's work, developed the theory further in 1755, publishing his own Universal Natural History and Theory of the Heavens, wherein he argued that gaseous clouds (nebulae) slowly rotate, gradually collapse and flatten due to gravity, eventually forming stars and planets.\n\nPierre-Simon Laplace independently developed and proposed a similar model in 1796 in his Exposition du syst\u00e8me du monde. He envisioned that the Sun originally had an extended hot atmosphere throughout the volume of the Solar System. His theory featured a contracting and cooling protosolar cloud\u2014the protosolar nebula. As this cooled and contracted, it flattened and spun more rapidly, throwing off (or shedding) a series of gaseous rings of material; according to him, the planets condensed from this material. His model was similar to Kant's, except more detailed and on a smaller scale. While the Laplacian nebular model dominated in the 19th century, it encountered a number of difficulties. The main problem involved angular momentum distribution between the Sun and planets. The planets have 99% of the angular momentum, and this fact could not be explained by the nebular model. As a result, astronomers largely abandoned this theory of planet formation at the beginning of the 20th century.\n\nA major critique came during the 19th century from James Clerk Maxwell (1831\u20131879), who maintained that differential rotation between the inner and outer parts of a ring could not allow condensation of material. Astronomer Sir David Brewster also rejected Laplace, writing in 1876 that \"those who believe in the Nebular Theory consider it as certain that our Earth derived its solid matter and its atmosphere from a ring thrown from the Solar atmosphere, which afterwards contracted into a solid terraqueous sphere, from which the Moon was thrown off by the same process\". He argued that under such view, \"the Moon must necessarily have carried off water and air from the watery and aerial parts of the Earth and must have an atmosphere\". Brewster claimed that Sir Isaac Newton, on religious grounds, had earlier considered nebular ideas as tending to atheism, and quoted him as saying that \"the growth of new systems out of old ones, without the mediation of a Divine power, seemed to him apparently absurd\".\n\nThe perceived deficiencies of the Laplacian model stimulated scientists to find a replacement for it.
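The angular momentum problem described above can be checked with a rough order-of-magnitude estimate. The sketch below uses rounded textbook constants and crudely models the Sun as a uniform sphere; every number in it is an approximation chosen for illustration, not a precise value:

```python
import math

# Rough check of the angular momentum problem: the Sun holds ~99.9% of the
# Solar System's mass but only a small share of its angular momentum.
# Constants are rounded textbook values; the Sun is crudely modeled as a
# uniform sphere (moment-of-inertia factor 2/5).

M_SUN, R_SUN = 1.99e30, 6.96e8        # kg, m
P_ROT = 25 * 86400                    # solar rotation period (~25 days), s
L_sun = 0.4 * M_SUN * R_SUN**2 * (2 * math.pi / P_ROT)

# Orbital angular momentum L = m*v*r for the two dominant contributors.
planets = {
    "Jupiter": (1.90e27, 7.78e11, 1.31e4),  # mass (kg), orbit radius (m), speed (m/s)
    "Saturn":  (5.68e26, 1.43e12, 9.7e3),
}
L_planets = sum(m * v * r for (m, r, v) in planets.values())

share = L_planets / (L_planets + L_sun)
print(f"Sun spin L     ~ {L_sun:.1e} kg m^2/s")
print(f"Planets' L     ~ {L_planets:.1e} kg m^2/s")
print(f"planets' share ~ {share:.0%}")
# Prints ~96% under the uniform-sphere assumption; the Sun's real
# moment-of-inertia factor (~0.07) pushes the planets' share toward the
# ~99% the text cites.
```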
During the 20th century many theories addressed the issue, including the planetesimal theory of Thomas Chamberlin and Forest Moulton (1901), the tidal model of James Jeans (1917), the accretion model of Otto Schmidt (1944), the protoplanet theory of William McCrea (1960) and finally the capture theory of Michael Woolfson. In 1978 Andrew Prentice resurrected the initial Laplacian ideas about planet formation and developed the modern Laplacian theory. None of these attempts proved completely successful, and many of the proposed theories were descriptive.\n\nThe birth of the modern widely accepted theory of planetary formation\u2014the solar nebular disk model (SNDM)\u2014can be traced to the Soviet astronomer Victor Safronov. His 1969 book Evolution of the protoplanetary cloud and formation of the Earth and the planets, which was translated into English in 1972, had a long-lasting effect on the way scientists think about the formation of the planets. In this book almost all major problems of the planetary formation process were formulated, and some of them were solved. Safronov's ideas were further developed in the works of George Wetherill, who discovered runaway accretion. While originally applied only to the Solar System, the SNDM was subsequently thought by theorists to be at work throughout the Universe; as of 1 September 2022 astronomers have discovered 5,157 extrasolar planets in our galaxy.\n\nUse of the term \"accretion disk\" for the protoplanetary disk leads to confusion over the planetary accretion process. The protoplanetary disk is sometimes referred to as an accretion disk, because while the young T Tauri-like protostar is still contracting, gaseous material may still be falling onto it, accreting on its surface from the disk's inner edge. In an accretion disk, there is a net flux of mass from larger radii toward smaller radii.\n\nHowever, that meaning should not be confused with the process of accretion forming the planets. In this context, accretion refers to the process of cooled, solidified grains of dust and ice orbiting the protostar in the protoplanetary disk, colliding and sticking together and gradually growing, up to and including the high-energy collisions between sizable planetesimals.\n\nIn addition, the giant planets probably had accretion disks of their own, in the first meaning of the word. The clouds of captured hydrogen and helium gas contracted, spun up, flattened, and deposited gas onto the surface of each giant protoplanet, while solid bodies within that disk accreted into the giant planet's regular moons.", "doc_id": "8e77aaa2-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Type_theory", "document": "In mathematics, logic, and computer science, a type theory is the formal presentation of a specific type system, and in general type theory is the academic study of type systems. Some type theories serve as alternatives to set theory as a foundation of mathematics. Two influential type theories that were proposed as foundations are Alonzo Church's typed \u03bb-calculus and Per Martin-L\u00f6f's intuitionistic type theory. Most computerized proof-writing systems use a type theory for their foundation. A common one is Thierry Coquand's Calculus of Inductive Constructions.\n\nType theory was created to avoid a paradox in a mathematical foundation based on naive set theory and formal logic. Russell's paradox, which was discovered by Bertrand Russell, arose because a set could be defined using \"all possible sets\", which included itself.
Between 1902 and 1908, Bertrand Russell proposed various \"theories of type\" to fix the problem. By 1908 Russell arrived at a \"ramified\" theory of types, together with an \"axiom of reducibility\", both of which featured prominently in Whitehead and Russell's Principia Mathematica, published between 1910 and 1913. This system avoided Russell's paradox by creating a hierarchy of types and then assigning each concrete mathematical entity to a type. Entities of a given type are built exclusively from entities of types lower in the hierarchy, thus preventing an entity from being defined using itself. Russell's theory of types ruled out the possibility of a set being a member of itself.\n\nTypes were not always used in logic. There were other techniques to avoid Russell's paradox. Types gained a firm foothold, however, when used with one particular logic, Alonzo Church's lambda calculus.\n\nThe most famous early example is Church's simply typed lambda calculus. Church's theory of types helped the formal system avoid the Kleene\u2013Rosser paradox that afflicted the original untyped lambda calculus. Church demonstrated that it could serve as a foundation of mathematics, and it was referred to as a higher-order logic.\n\nThe phrase \"type theory\" now generally refers to a typed system based around lambda calculus. One influential system is Per Martin-L\u00f6f's intuitionistic type theory, which was proposed as a foundation for constructive mathematics. Another is Thierry Coquand's calculus of constructions, which is used as the foundation by Coq, Lean, and other \"proof assistants\" (computerized proof-writing programs). Type theories are an area of active research, as demonstrated by homotopy type theory. (A minimal illustrative type checker in the spirit of the simply typed lambda calculus is sketched after this entry.)", "doc_id": "8e77ab7e-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/BTSB_anti-D_scandal", "document": "In 1994, the Irish Blood Transfusion Service Board (BTSB) informed the Minister for Health that a blood product they had distributed in 1977 for the treatment of pregnant mothers had been contaminated with the hepatitis C virus. Following a report by an expert group, it was discovered that the BTSB had produced and distributed a second infected batch in 1991. The Government established a Tribunal of Inquiry to establish the facts of the case and also agreed to establish a tribunal for the compensation of victims, but seemed to frustrate and delay the applications of these, in some cases terminally ill, women.\n\nThis controversy also sparked an examination of the BTSB's lax procedures for screening blood products for the treatment of haemophilia and exposed the infection of many haemophiliacs with HIV, hepatitis B and hepatitis C.\n\nThe Blood Transfusion Service Board (BTSB) has responsibility for the production and supply of human blood products used for the treatment of various blood-related conditions. In 1970, it began production of anti-D human immunoglobulin for the treatment of rhesus-negative (blood type) mothers who, having previously given birth to rhesus-positive babies, could have antibodies that would cause haemolytic disease (HDFN) in the foetus of future pregnancies. If, following a neonatal blood test, the rhesus (Rh) factor of the infant is found to be incompatible with that of the mother, an anti-D injection can be given to the mother to protect her future pregnancies.
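The sketch promised in the type theory entry above: a minimal, illustrative type checker for a simply typed lambda calculus, in Python. All constructor and function names here are invented for this sketch and are not drawn from any proof assistant; the point is only that a self-application such as x x, the kind of term behind the Kleene-Rosser and Russell paradoxes, cannot be assigned a type.

```python
# Minimal sketch of a simply typed lambda calculus, for illustration only.
from dataclasses import dataclass

# --- Types: a base type and function types (A -> B) ---
@dataclass(frozen=True)
class Base:
    name: str

@dataclass(frozen=True)
class Arrow:
    arg: object
    res: object

# --- Terms: variables, typed lambda abstractions, applications ---
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    param: str
    param_type: object
    body: object

@dataclass(frozen=True)
class App:
    fn: object
    arg: object

def type_of(term, env=None):
    """Return the type of `term`, or raise TypeError if it is ill-typed.

    Every bound variable carries a type, so a self-application (x x)
    can never check: x would need to be both a function type and its
    own argument type.
    """
    env = env or {}
    if isinstance(term, Var):
        return env[term.name]
    if isinstance(term, Lam):
        body_t = type_of(term.body, {**env, term.param: term.param_type})
        return Arrow(term.param_type, body_t)
    if isinstance(term, App):
        fn_t = type_of(term.fn, env)
        if not isinstance(fn_t, Arrow) or fn_t.arg != type_of(term.arg, env):
            raise TypeError("ill-typed application")
        return fn_t.res

o = Base("o")
print(type_of(Lam("x", o, Var("x"))))          # identity: Arrow(Base('o'), Base('o'))
try:
    type_of(Lam("x", o, App(Var("x"), Var("x"))))
except TypeError as e:
    print("rejected:", e)                       # self-application does not type-check
```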
If the mother were to develop her own rhesus antibodies, she would be required to undergo a course of plasma exchange transfusion throughout her pregnancy to reduce the level of rhesus antibodies in her blood.\n\nIn 1970 the BTSB began manufacturing anti-D human immunoglobulin for intravenous application at its Dublin laboratory, using a process developed in 1967 by Professor Hans-Hermann Hoppe of Hamburg's Central Institute for Transfusion Medicine, one of the founders of the German transfusion service, StKB. The process involved the use of ion-exchange chromatography together with an ethanol precipitation, which was thought at the time to inactivate viruses that might be present in donated blood, thereby removing them from the plasma eventually fractionated in the process. In 1972 Professor Hoppe notified the BTSB that he had refined his process to include a plasma quarantine period and ultrafiltration (instead of ethanol precipitation), but the BTSB continued to use his 1967 process. By 1975 it was known that hepatitis was a blood-borne disease and that multiple types of hepatitis virus were in circulation. Tests were available to identify the hepatitis A and hepatitis B viruses, and although it was suspected that another strain of the virus was responsible for jaundice in patients whose blood did not test positive for either type, there was no diagnostic test for hepatitis C until 1990.\n\nIn 1976 a pregnant woman (referred to as \"Patient X\" in the report of the Finlay Tribunal) was a patient of Dr. McGuinness, assistant master of the Coombe Maternity Hospital. Having had several pregnancies severely affected by haemolytic disease, Patient X was prescribed a therapeutic course of plasma exchange over a 25-week period, to reduce the antibodies that would damage her foetus. The obstetric consultant suggested to one of the BTSB staff that they could offset the cost of Patient X's treatment by using the plasma extracted from her (which had high concentrations of anti-D) to manufacture anti-D immunoglobulin. Patient X was never asked to consent to her plasma being used in this way.\n\nHer treatment began in September 1976, and plasma from her first two treatments was mixed with that from other donors in 5 batches of anti-D produced by the BTSB and distributed between January and April 1977. On 4 November 1976, Patient X had a reaction to her plasma exchange and her treatment was suspended temporarily. On 17 November the Coombe Hospital notified the BTSB that Patient X had become jaundiced and was diagnosed as having hepatitis.\n\nDr. McGuinness requested that a sample of her blood be tested for hepatitis B and sent a second sample to the Middlesex Hospital in London. These tests reported negative for hepatitis B (hepatitis C was neither recognized nor testable at this time). As Patient X's plasma exchange treatments continued, regular blood samples were sent to the BTSB to monitor the level of rhesus antibodies in her blood, each sample labelled \"infective hepatitis\". Despite all senior medical staff at the BTSB being aware of this infection, they continued to take plasma donations from Patient X throughout January 1977 and include these in the pools used to make 16 batches of anti-D, which were distributed to maternity hospitals for administration.
The number of doses in each batch could vary from 250 to 400 injections.\n\nIn July 1977, the BTSB received a report from the Rotunda Hospital that 3 mothers who had received injections from anti-D batch 238 had subsequently developed hepatitis. On 25 July, the chief biochemist of the BTSB laboratory was instructed to exclude Patient X's plasma from all pools used to manufacture anti-D. She did precisely this, but the BTSB did not consider disposing of or recalling existing batches in which Patient X's plasma had already been used, and continued to distribute these to hospitals.\n\nSamples from the 16 batches of anti-D that included Patient X's plasma and samples from the 3 Rotunda patients were sent to the Middlesex Hospital for testing, which again was inconclusive as no test for hepatitis C existed. The Scientific Committee of the BTSB began to compile a list of the destinations to which doses from anti-D batch 238 had been sent. It is unclear, however, if this was completed or used in any recall operation. Between August and December 1977, the BTSB received notifications of similar cases from the maternity hospitals at the Coombe and Holles Street in Dublin, indicating contamination in two other batches of anti-D. Despite continued notifications of hepatitis cases in 1977 and 1978, the BTSB issued no national recall of its anti-D product.", "doc_id": "8e77ac6e-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/1960_Los_Angeles_Chargers_season", "document": "The 1960 Los Angeles Chargers season was the team's inaugural season and also the inaugural season of the American Football League (AFL). Head coach Sid Gillman led the Chargers to the AFL Western Division title with a 10\u20134 record, winning eight games out of nine after a 2\u20133 start, and qualifying to play the Houston Oilers in the AFL championship game.\n\nThe Chargers had the right to host the championship game at their home venue, the Los Angeles Memorial Coliseum. However, as the team's attendance for home games was falling below 10,000, league and television officials feared showing empty seats in the 100,000+ seat Coliseum, and they persuaded the Chargers to give up the advantage. The game was moved to Houston's Jeppesen Stadium. The teams had split their two games in the regular season, with the home teams winning, and the host Oilers were 6.5-point favorites to win the title. Down by a point after three quarters, the Chargers gave up an 88-yard touchdown in the fourth quarter and lost, 24\u201316.\n\nThe Chargers' poor attendance figures soon led to speculation that they might leave Los Angeles. In December, owner Barron Hilton denied that he was planning a move, but in late January he relocated the Chargers down the coast to Balboa Stadium in San Diego for the 1961 season. The team would not return to Los Angeles until 2017.\n\nThe AFL granted a Los Angeles franchise to Barron Hilton on August 14, 1959; the nickname \"Chargers\" was announced on October 27. Hilton's first major signing was former Notre Dame coach and administrator Frank Leahy, who became the club's first general manager on October 14 and began the search for a head coach. Leahy also employed Don Klosterman as Director of Personnel, to help sign new players.\n\nBob McBride, a former assistant of Leahy's at Notre Dame, was named the first Chargers head coach on November 19, but McBride changed his mind within 24 hours of the announcement and pulled out.
Subsequently, Leahy had several talks with Los Angeles Rams offensive line coach Lou Rymkus about the vacancy, but Rymkus ultimately joined another AFL team, the Houston Oilers.\n\nOn December 12, Sid Gillman left the NFL's Los Angeles Rams after five years as their head coach. The Rams had reached the NFL Championship Game in Gillman's first season in charge, but went 2\u201310 in 1959. After his exit, Gillman considered retiring from football to become a stockbroker, but was soon lured to the AFL when approached by Hilton, signing a three-year contract on January 7, 1960, having been out of work for just 26 days.\n\nGillman recruited four assistant coaches in the months that followed. Both Joe Madro and Jack Faulkner had been on Gillman's staff with the Rams; they were installed as offensive line and defensive backfield coaches, respectively. Al Davis was serving as the defensive coordinator at the University of Southern California when Gillman called to offer him a post of his choosing. Davis agreed, taking over the offensive ends, as he wanted to be involved with the passing game. The group was completed on February 1 by the addition of Chuck Noll as defensive line coach. Noll had recently concluded a seven-year playing career with the Cleveland Browns, where he had served as both an offensive lineman and a linebacker. Three of the coaches on this five-man staff are now in the Hall of Fame: Gillman, Davis and Noll.\n\nIn July 1960, Leahy resigned as general manager due to ill health, and Gillman took over the role on top of his head coaching duties.\n\nLos Angeles won their first regular season game with a late comeback. Dallas had touchdown drives of 60 and 94 yards on either side of a Charger punt, and led 13\u20130 midway through the 2nd quarter. Later, Dallas had to punt from deep in their own territory, and Los Angeles took over on the Texan 46. They scored their first touchdown on the next play as Jack Kemp faked a handoff and threw a deep pass down the left sideline. Ralph Anderson caught the ball at the five and back-pedaled into the end zone. Dallas responded with an 80-yard touchdown drive, and led 20\u20137 at the break.\n\nIn the 3rd quarter, Los Angeles reached the Dallas eight-yard line, but Kemp was sacked on 4th and goal. They appeared to have been stopped on downs again in the 4th quarter, but Anderson had drawn a pass interference penalty, and the drive continued. Running back Paul Lowe completed a 24-yard pass to Anderson, then Kemp scrambled in from the Dallas 7-yard line, diving across the goal line with 9:38 to play.\n\nFollowing a Texan three-and-out, the Chargers began the winning drive on their own 10. Penalties were again key. Dallas recovered a fumble in Charger territory, but the turnover was negated by a flag. Kemp then converted a 3rd and 15 with a 16-yard completion to Howard Clark, before Los Angeles reached a 4th and 6 from the Dallas 37-yard line. Again, Kemp's pass was incomplete, but again a pass interference call (this time drawn by Dave Kocourek) saved the drive. Five plays later, it was 3rd and goal from the four-yard line, and Kemp found Howie Ferguson in the left flat for the winning score with 2:15 to play. Jimmy Sears stopped Dallas with a 4th-down interception, and the Chargers ran out the clock.\n\nKemp was 24 of 41 for 275 yards, two touchdowns and no interceptions.
Anderson caught 5 passes for 103 yards and a touchdown; Lowe was the leading Charger rusher with just 20 yards.", "doc_id": "8e77ad90-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Mirai_Nagasu", "document": "Mirai Aileen Nagasu (born April 16, 1993) is an American figure skater. She is a 2018 Olympic Games team event bronze medalist, three-time Four Continents medalist (silver in 2016, bronze in 2011 and 2017), the 2007 JGP Final champion, a two-time World Junior medalist (silver in 2007, bronze in 2008), and a seven-time U.S. national medalist (gold in 2008, silver in 2010 and 2018, bronze in 2011 and 2014, pewter in 2016 and 2017).\n\nIn 2008, Nagasu became the youngest woman since Tara Lipinski in 1997 to win the U.S. senior ladies' title, and the second-youngest in history at the time. She is the first lady since Joan Tozzer in 1937 and 1938 to win the junior and senior national titles in consecutive years. Nagasu represented the United States at the 2010 Winter Olympics at the age of 16 and placed 4th in the ladies' event. In 2017, she landed the difficult triple Axel jump for the first time in international competition at the 2017 CS U.S. Classic. During her free skate in the team event at the 2018 Olympics, she became the first American ladies' singles skater to land a triple Axel at the Olympics, and the third woman from any country to do so. This also made her the first senior ladies skater ever to land eight triple jumps (the maximum allowed in the free skate under the Zayak rule) cleanly in international competition.\n\nMirai Aileen Nagasu was born in Montebello, Los Angeles County, California and raised in Arcadia, California. Her parents own Restaurant Kiyosuzu, a Japanese sushi restaurant in Arcadia. They are immigrants from Japan, and their daughter held dual citizenship but was required by Japanese law to choose one nationality before her 22nd birthday; she chose U.S. citizenship. Nagasu speaks a mixture of Japanese and English at home with her parents. Her mother, Ikuko, was diagnosed with thyroid cancer in the fall of 2009. Mirai (\u672a\u6765) means \"future\" in Japanese, while her last name is written as \u9577\u6d32 in kanji.\n\nNagasu graduated from Foothills Middle School in the spring of 2007 and entered Arcadia High School in the fall of 2007. In 2009, she began attending an online high school. She graduated from the Capistrano Connections Academy in June 2011 and was accepted into the University of California, Irvine, but said the commute was not feasible. Around 2015, she enrolled at the University of Colorado Colorado Springs and took courses in business. Nagasu graduated from UCCS with a degree in business administration in December 2020.\n\nDuring the 2015\u201316 NHL season, Nagasu worked for the Colorado Avalanche as an ice girl and as a franchise ambassador at events in the Greater Denver area, such as learn-to-skate programs.\n\nFor the 2009\u201310 season, Nagasu was assigned to the 2009 Cup of China and the 2009 Skate Canada International Grand Prix events. She won the short program at the 2009 Cup of China, but placed sixth in the free skate to finish fifth overall. A few weeks later she competed at the 2009 Skate Canada, where she finished fourth.\n\nIn January 2010, Nagasu competed at U.S. Nationals, where she placed first in the short program with a score of 70.06 points. She placed third in the free skate, winning the silver medal behind Rachael Flatt.
Following the event, she was nominated to represent the United States at the 2010 Winter Olympics and was also selected to compete at the World Championships along with Flatt.\n\nDuring the 2010 Winter Olympics, she placed sixth in the short program. She placed fifth in the free skate and fourth overall, earning new personal bests for the free skate score and combined total. At Worlds, Nagasu led the short program with a personal best score of 70.40 points, positioned ahead of Mao Asada by 2.32 points. Ranked eleventh in the free skate, she finished in seventh place overall.\n\nA stress fracture kept Nagasu out of training for a month during the summer. She returned to practice in September 2010. Nagasu started her 2010\u201311 Grand Prix season by finishing fourth at the 2010 Cup of China. At the 2010 Troph\u00e9e Eric Bompard, she placed second in the short program. In the free skate, Nagasu had trouble on her layback spin. She still earned enough points to win the free skate, scoring 109.07, and won the silver overall, her first senior Grand Prix medal. If she had executed the spin correctly, she would have won the gold.\n\nAt U.S. Nationals, Nagasu was in first place after the short program with a small lead. In the long program, she received zero points for a botched flying sit spin and finished third overall to win the bronze medal. Nagasu was assigned to the 2011 Four Continents, where she won the bronze medal with an overall score of 189.46. She was the first alternate to the 2011 World Championships but did not compete, despite Rachael Flatt being injured.\n\nLooking back on the season, Nagasu said, \"Getting my body back into shape [after the injury] was tough. I really did not get back into shape until Four Continents, where I did the best I could.\" Focus had also been an issue; \"She was thinking of some things that didn't go so well before or something that was coming up -- all kinds of different thoughts instead of getting out there and doing each thing that was coming along and just doing the program\", according to her coach, Frank Carroll.\n\nNagasu is considered a strong spinner, and has received a straight +3.00 grade of execution for her layback spin. She often performs the Biellmann spin with a variation in which her hands are on the boot of her skate instead of the blade.\n\nNagasu has worked on improving her jumps to avoid under-rotations. She has added a triple Axel jump to her programs, landing two fully rotated triple Axel jumps at the 2017 CS U.S. International Figure Skating Classic, albeit with negative grades of execution. She is the second US woman skater to have landed a triple Axel jump internationally, after Tonya Harding. In 2018, she became the first U.S. woman skater to have landed the triple Axel in an Olympic competition.", "doc_id": "8e77aec6-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Timba", "document": "Timba is a Cuban genre of music based on Cuban son with salsa, American funk/R&B and the strong influence of Afro-Cuban folkloric music. Timba rhythm sections differ from their salsa counterparts, because timba emphasizes the bass drum, which is not used in salsa bands. Timba and salsa use the same tempo range and they both use the standard conga marcha. Almost all timba bands have a trap drummer. Timba bands also often break the basic tenets of arranging the music in clave. Timba is considered to be a highly aggressive type of music, with rhythm and \"swing\" taking precedence over melody and lyricism.
Associated with timba is a radically sexual and provocative dance style known as despelote (literally meaning chaos or frenzy). Timba itself is a dynamic evolution of salsa, full of improvisation and Afro-Cuban heritage, based on son, rumba and mambo, taking inspiration from Latin jazz, and is highly percussive with complex sections. Timba is more flexible and innovative than salsa, and includes a more diverse range of styles. Timba incorporates heavy percussion and rhythms which originally came from the barrios of Cuba.\n\nBefore it became the newest Cuban music and dance craze, timba was a word with several different uses yet no particular definition, mostly heard within the Afro-Cuban genre of rumba. A timbero was a complimentary term for a musician, and timba often referred to the collection of drums in a folklore ensemble. Since the 1990s, timba has referred to Cuba's intense and slightly more aggressive music and dance form.\n\nAs opposed to salsa, whose roots are strictly from son and the Cuban conjunto bands of the 1940s and 1950s, timba represents a synthesis of many folkloric (rumba, guaguanc\u00f3, bat\u00e1 drumming and the sacred songs of santer\u00eda) and popular sources (even taking inspiration from non-Afro-Cuban musical genres such as rock, jazz, funk, and Puerto Rican folk). According to Vincenzo Perna, author of Timba: The Sound of the Cuban Crisis, timba needs to be spoken of for musical, cultural, social, and political reasons: its sheer popularity in Cuba, its novelty and originality as a musical style, the skill of its practitioners, its relationship with both local traditions and the culture of the black diaspora, its meanings, and the way its style brings to light the tension points within society. In addition to timbales, timba drummers make use of the drum set, further distinguishing the sound from that of mainland salsa. The use of synthesised keyboards is also common. Timba songs tend to sound more innovative, experimental and frequently more virtuosic than salsa pieces; horn parts are usually fast, at times even bebop influenced, and stretch to the extreme ranges of all instruments. Bass and percussion patterns are similarly unconventional. Improvisation is commonplace.\n\nDuring the Special Period of the early 1990s, timba became a significant form of expression for the cultural and social upheaval that occurred. The Special Period was a time of economic downfalls and hardships for the Cuban people. In the wake of the dissolution of the Soviet Union, Cuba's main trading partner, the country experienced its worst crisis since the revolution. Cuba now opened its doors to tourism, and the influx of tourists to the island helped broaden the appeal of the music and dance of timba. The stand-off between Cuba and most of the rest of the world gave timba space to breathe new life into Cuban cities, causing the nightlife and party scene to grow. Timba's danceable beat and energizing sound were popular among the tourists at a time when the music and dance scene was indirectly helping provide some support for Cuba's struggling economy.\n\nWhile timba developed at the beginning of a decade when Afro-Cuban conservatory graduates were turning to popular music catering to inner-city youth, its growth followed that of the music and tourist industries, as the state tried to address the economic challenges of the post-Soviet world.
Timba lyrics generated considerable controversy due to their use of vulgar and witty street language, and also because they made veiled references to public concerns including prostitution, crime, and the effects of tourism on the island, subjects that had only rarely been addressed by other musicians and had not previously been normal in Cuban song texts. There was also a reaffirmation of Cuban identity. The difference of opinion between the old traditionalists going abroad for success and the young bloods stuck at home \u2013 and the difference in financial rewards \u2013 was bound to lead to friction. Since then, timba has largely crossed over from an accessible, mainstream medium to one that is directed at wealthy elites in high-end venues. This places timba in contrast with rap, which has come in some ways to fill the role of the music of the masses.", "doc_id": "8e77afd4-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Onew", "document": "Lee Jin-ki (born December 14, 1989), better known by his stage name Onew, is a South Korean singer, songwriter, actor and host. Born in Gwangmyeong, Gyeonggi-do, Onew was discovered at the 2006 SM Academy Casting and signed a contract with SM Entertainment the day after his audition. He debuted as the lead vocalist and leader of the boy group Shinee in May 2008; the group went on to become one of the best-selling artists in South Korea.\n\nAs a singer, he has participated in the original soundtracks for various TV series and released collaborations with various artists. He made his solo debut on December 5, 2018, with the release of his first extended play, Voice, five days before his military enlistment on December 10, 2018. The EP peaked at number two on South Korea's Gaon Album Chart. He released his second EP Dice on April 11, 2022, which peaked at number three on the Gaon Album Chart. He made his solo debut in Japan on July 6, 2022, with the release of his first studio album Life Goes On. Onew has also contributed to songwriting for both himself and Shinee.\n\nAs an actor, Onew was cast in multiple musicals, such as Rock of Ages (2010), Shinheung Military Academy (2019), and Midnight Sun (2021\u20132022), and participated in various television dramas, best known for the roles of Baek Su in JTBC's sitcom Welcome to Royal Villa and the cardiothoracic resident Lee Chi-hoon in the popular KBS2 drama Descendants of the Sun (2016).\n\nOnew is an only child. He became interested in music at a young age, when he started to play the piano; this passion later inspired him to pursue a career in the music industry. Onew graduated from the Gwangmyeong Information Industry High School. In his senior year, he ranked second in his grade and held the highest national score for his high school SATs exam. He started attending Chungwoon University, where he majored in broadcasting music. After receiving his bachelor's degree, he continued to attend the university for a master's degree, which he earned in practical music.\n\nOnew is one of the main vocalists of Shinee and is known for his distinctive vocal color and for his calm and understated voice, providing the strong vocal foundation of the group with fellow member Jonghyun. In June 2014, Onew underwent a vocal cord polyp removal and vocal fold mucosa reconstruction operation, which made him unable to sing for a few months. In December 2014, Kim Yeon-woo, Onew's vocal coach, revealed during a radio broadcast that Onew's condition had improved after the surgery.
He also confirmed that Onew's vocal range \"improved and he can make sounds comfortably too\".\n\nIn order to support Onew in the musical Rock of Ages in 2010, his fans donated 1.44 tons of rice, a common practice for fans in South Korea, with the expectation that the idol would then donate it to a charity of their choice; Onew donated it to help feed North Korean children, in a donation drive prepared by the Child Development Program. Onew also donated 770 kg of rice to children in need in South Korea in May 2010. In 2016, Onew donated roughly 1.2 million won to the Korean Heart Association.\n\nIn August 2017, Onew was accused of sexual harassment. The victim stated that on August 12, in a night club in Gangnam, an intoxicated Onew touched her leg two or three times over the course of two hours as he tried to stand up, while she was dancing on one of the club's multiple dancing platforms adjacent to where he was seated. However, she acknowledged that such incidents could happen under the influence of alcohol and withdrew the charge. Nevertheless, the case was forwarded to the prosecution as a recommendation for indictment without detention. After a four-month hiatus, Onew posted a letter of apology, which was accepted widely among his fans, but others called for him to step down from Shinee. On April 5, 2018, SM Entertainment announced that the charges against him had been dismissed by the prosecutors.", "doc_id": "8e77b0ba-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Gotland", "document": "Gotland (Gutland in Gutnish), also historically spelled Gottland or Gothland, is Sweden's largest island. It is also a province, county, municipality, and diocese. The province includes the islands of F\u00e5r\u00f6 and Gotska Sand\u00f6n to the north, as well as the Karls\u00f6 Islands (Lilla and Stora) to the west. The population is 58,595, of which about 23,600 live in Visby, the main town. Outside Visby, there are minor settlements and a mainly rural population. The island of Gotland and the other areas of the province of Gotland make up less than one percent of Sweden's total land area. The county formed by the archipelago is the second smallest by area and is the least populated in Sweden. In spite of its small size, a result of its narrow width, the driving distance between the furthermost points of the populated islands is about 170 kilometres (110 mi).\n\nGotland is a fully integrated part of Sweden with no particular autonomy, unlike several other offshore island groups in Europe. Historically there was a linguistic difference between the archipelago and the mainland, with Gutnish being the native language. In recent centuries, Swedish took over almost entirely and the island is virtually monolingually Swedish in modern times. The archipelago is a very popular domestic tourist destination for mainland Swedes, with the population rising to very high numbers during summers. Reasons include the sunny climate and the extensive shoreline on mild water. During summer Visby hosts the political event Almedalen Week followed by the Medieval Week, further boosting visitor numbers. In winter, Gotland usually remains surrounded by ice-free water and has mild weather.\n\nGotland has been inhabited since approximately 7200 BC. Its location in the centre of the Baltic Sea has historically given it great strategic importance.
The island's main sources of income are agriculture, food processing, tourism, information technology services, design, and some heavy industry such as concrete production from locally mined limestone. From a military viewpoint, it occupies a strategic location in the Baltic Sea.\n\nGotland has a semi-continental variety of a marine climate (Cfb). This results in larger seasonal differences than are typical of marine climates, in spite of the island being surrounded by the Baltic Sea for large distances in all directions; this is due to strong continental winds travelling over the sea from the surrounding great landmasses. Seasonal temperature variation is smaller in more isolated places on the island such as Hoburgen or \u00d6stergarnsholm, which have warmer autumns and winters but cooler days in spring and summer. Seasonal lag is exceptionally strong at the \u00d6stergarnsholm weather station. As an example, December is warmer than March, with temperature lows similar to those of April. August is typically the warmest month, an unusual occurrence for Swedish sites. In the capital, Visby, July and August temperatures tend to be quite even.\n\nSince winters usually remain just above freezing and brackish water stays liquid at lower temperatures than fresh water, the sea remains ice-free all year round, except during rare extreme cold waves. The last time the whole passage from the mainland to Gotland froze was in 1987, when icebreakers were used to maintain passenger and goods traffic to the island.\n\nGotland is made up of a sequence of sedimentary rocks of Silurian age, dipping to the south-east. The main Silurian succession of limestones and shales comprises thirteen units spanning 200 to 500 m (660 to 1,640 ft) of stratigraphic thickness, being thickest in the south, and overlies a 75 to 125 m (246 to 410 ft) thick Ordovician sequence.\n\nIt was deposited in a shallow, hot, and salty sea on the edge of an equatorial continent. The water depth never exceeded 175 to 200 m (574 to 656 ft), and became shallower over time as bioherm detritus and terrestrial sediments filled the basin. Reef growth started in the Llandovery, when the sea was 50 to 100 m (160 to 330 ft) deep, and reefs continued to dominate the sedimentary record. Some sandstones are present in the youngest rocks towards the south of the island, which represent sand bars deposited very close to the shoreline.\n\nThe limestones have been weathered into characteristic karstic rock formations known as rauks. Fossils, mainly of crinoids, rugose corals and brachiopods, are abundant throughout the island; pal\u00e6o-sea-stacks are preserved in places.\n\nMost of Gotland's economy is based on small-scale production. In 2012, there were over 7,500 registered companies on Gotland. 1,500 of these had more than one employee. Gotland has the world's northernmost established vineyard and winery, located in Hablingbo.", "doc_id": "8e77b1a0-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Project_Gemini", "document": "Project Gemini was NASA's second human spaceflight program. Conducted between projects Mercury and Apollo, Gemini started in 1961 and concluded in 1966. The Gemini spacecraft carried a two-astronaut crew.
Ten Gemini crews and 16 individual astronauts flew low Earth orbit (LEO) missions during 1965 and 1966.\n\nGemini's objective was the development of space travel techniques to support the Apollo mission to land astronauts on the Moon. In doing so, it allowed the United States to catch up with and overcome the lead in human spaceflight capability that the Soviet Union had obtained in the early years of the Space Race, by demonstrating: mission endurance up to just under 14 days, longer than the eight days required for a round trip to the Moon; methods of performing extra-vehicular activity (EVA) without tiring; and the orbital maneuvers necessary to achieve rendezvous and docking with another spacecraft. This left Apollo free to pursue its prime mission without spending time developing these techniques.\n\nAll Gemini flights were launched from Launch Complex 19 (LC-19) at Cape Kennedy Air Force Station in Florida. Their launch vehicle was the Gemini\u2013Titan II, a modified intercontinental ballistic missile (ICBM). Gemini was the first program to use the newly built Mission Control Center at the Houston Manned Spacecraft Center for flight control.\n\nThe astronaut corps that supported Project Gemini included the \"Mercury Seven\", \"The New Nine\", and \"The Fourteen\". During the program, three astronauts died in air crashes during training, including both members of the prime crew for Gemini 9. This mission was flown by the backup crew.\n\nGemini was robust enough that the United States Air Force planned to use it for the Manned Orbiting Laboratory (MOL) program, which was later canceled. Gemini's chief designer, Jim Chamberlin, also made detailed plans for cislunar and lunar landing missions in late 1961. He believed Gemini spacecraft could fly in lunar operations before Project Apollo, and cost less. NASA's administration did not approve those plans. In 1969, McDonnell Douglas proposed a \"Big Gemini\" that could have been used to shuttle up to 12 astronauts to the planned space stations in the Apollo Applications Project (AAP). The only AAP project funded was Skylab \u2013 which used existing spacecraft and hardware \u2013 thereby eliminating the need for Big Gemini.\n\nChamberlin designed the Gemini capsule, which carried a crew of two. He was previously the chief aerodynamicist on Avro Canada's Avro Arrow fighter interceptor program. Chamberlin joined NASA along with 25 senior Avro engineers after cancellation of the Canadian Arrow program, and became head of the U.S. Space Task Group's engineering division in charge of Gemini. The prime contractor was McDonnell Aircraft Corporation, which was also the prime contractor for the Project Mercury capsule.\n\nAstronaut Gus Grissom was heavily involved in the development and design of the Gemini spacecraft. What other Mercury astronauts dubbed \"Gusmobile\" was so designed around Grissom's 5'6\" body that, when NASA discovered in 1963 that 14 of 16 astronauts would not fit in the spacecraft, the interior had to be redesigned. Grissom wrote in his posthumous 1968 book Gemini! that the realization of Project Mercury's end and the unlikelihood of his having another flight in that program prompted him to focus all his efforts on the upcoming Gemini program.\n\nThe Gemini program was managed by the Manned Spacecraft Center, located in Houston, Texas, under the direction of the Office of Manned Space Flight, NASA Headquarters, Washington, D.C. Dr. George E.
Mueller, Associate Administrator of NASA for Manned Space Flight, served as acting director of the Gemini program. William C. Schneider, Deputy Director of Manned Space Flight for Mission Operations, served as mission director on all Gemini flights beginning with Gemini 6A.\n\nGuenter Wendt was a McDonnell engineer who supervised launch preparations for both the Mercury and Gemini programs and would go on to do the same when the Apollo program launched crews. His team was responsible for completion of the complex pad close-out procedures just prior to spacecraft launch, and he was the last person the astronauts would see prior to closing the hatch. The astronauts appreciated his taking absolute authority over, and responsibility for, the condition of the spacecraft and developed a good-humored rapport with him.\n\nDeke Slayton, as director of flight crew operations, had primary responsibility for assigning crews for the Gemini program. Each flight had a primary crew and backup crew, and the backup crew would rotate to primary crew status three flights later. Slayton intended for first choice of mission commands to be given to the four remaining active astronauts of the Mercury Seven: Alan Shepard, Grissom, Cooper, and Schirra. John Glenn had retired from NASA in January 1964, and Scott Carpenter, who was blamed by some in NASA management for the problematic reentry of Aurora 7, was on leave to participate in the Navy's SEALAB project and was grounded from flight in July 1964 due to an arm injury sustained in a motorbike accident. Slayton himself continued to be grounded due to a heart problem.\n\nFrom 1962 to 1967, Gemini cost $1.3 billion in 1967 dollars ($7.85 billion in 2020[28]). In January 1969, a NASA report to the US Congress estimating the costs for Mercury, Gemini, and Apollo (through the first crewed Moon landing) included $1.2834 billion for Gemini: $797.4 million for spacecraft, $409.8 million for launch vehicles, and $76.2 million for support.", "doc_id": "8e77b2d6-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Maserati_Quattroporte", "document": "The Maserati Quattroporte is a four-door full-size luxury sports sedan produced by Italian automobile manufacturer Maserati. The name, translated from Italian, means \"four doors\". The car is currently in its sixth generation, with the first generation introduced in 1963.\n\nThe original Maserati Quattroporte (Tipo AM107) was built between 1963 and 1969. It was a large saloon powered by V8 engines\u2014both firsts for a series production Maserati automobile.\n\nThe task of styling the Quattroporte was given to Turinese coachbuilder Pietro Frua, who drew inspiration from a special 5000 GT (chassis number 103.060) which he had designed in 1962 for Prince Karim Aga Khan. While the design was by Frua, body construction was carried out by Vignale.\n\nThe Quattroporte was introduced at the October\u2013November 1963 Turin Motor Show, where a pre-production prototype was on the Maserati stand next to the Mistral coup\u00e9. Regular production began in 1964. The Tipo 107 Quattroporte joined two other grand tourers, the Facel Vega and the Lagonda Rapide, in being capable of traveling at speeds of up to 200 km/h (124 mph) on the new motorways in Europe. It was equipped with a 4.1-litre (4,136 cc or 252 cu in) V8 engine, rated at 264 PS (194 kW; 260 hp) DIN at 5,000 rpm, and fitted with either a five-speed ZF manual transmission or a three-speed Borg Warner automatic on request.
Maserati claimed a top speed of 230 km/h (143 mph). The car was also exported to the United States, where federal regulations mandated twin round headlamps in place of the single rectangular ones found on European models.\n\nThe first generation of the Quattroporte had a steel unibody structure, complemented by a front subframe. Front suspension was independent, with coil springs and hydraulic dampers. Rear suspension used a coil-sprung De Dion tube featuring inboard brakes on the first series, later changed to a more conventional Salisbury leaf-sprung solid axle with a single trailing link on the second series. On both axles there were anti-roll bars. Brakes were solid Girling discs all around. A limited slip differential was optional.\n\nThe long-lived quad-cam, all-aluminium Maserati V8 engine made its d\u00e9but on the Quattroporte. It featured two chain-driven overhead camshafts per bank, 32 angled valves, hemispherical combustion chambers, inserted cast iron wet cylinder liners, and was fed through an aluminium, water-cooled inlet manifold by four downdraft twin-choke Weber carburetors\u2014initially 38 DCNL 5 and 40 DCNL 5 on 4200 and 4700 cars respectively, later changed to 40 DCNF 5 and 42 DCNF 5 starting from December 1968.\n\nThe fourth-generation Quattroporte is a four-door, five-seater saloon with a steel unibody construction. The overall layout remained unchanged from the Biturbo from which the car descended: longitudinal front engine, rear-wheel drive, all-independent suspension with MacPherson struts up front and trailing arms at the rear. Despite these similarities, the suspension had been re-engineered: the rear trailing arms had a tube framework structure like that on the Shamal, together with the limited slip differential. These two components were attached to the body via a newly designed tubular subframe.\n\n", "doc_id": "8e77b39e-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Conservation_and_restoration_of_paintings", "document": "The conservation and restoration of paintings is carried out by professional painting conservators. Paintings cover a wide range of various mediums, materials, and their supports (i.e. the painted surface made from fabric, paper, wood panel, fabricated board, or other). Painting types range from fine art to decorative and functional objects, spanning acrylics, frescoes, and oil paint on various surfaces, egg tempera on panels and canvas, lacquer painting, watercolor, and more. Knowing the materials of any given painting and its support allows for the proper restoration and conservation practices. All components of a painting will react to its environment differently, and impact the artwork as a whole. These material components along with collections care (also known as preventive conservation) will determine the longevity of a painting. The first step in conservation and restoration is preventive conservation, followed by active restoration with the artist's intent in mind.\n\nTraditional oil, acrylic, and many other types of paintings are made up of various different types of materials, from their paint layers to the materials that make up their supports. Each of these materials requires specific care in handling, displaying, storage, added protective measures, and general environmental conditions.
Providing the proper care to each of these materials ensures that the overall condition of the painting is protected.\n\nUsing good protective measures such as attaching a rigid backing to a painting on canvas provides several protections. It reduces the effects of rapid changes in relative humidity around the painting, provides some protection from pressure or direct contact against the canvas back, and protects from vibrations caused by handling or moving. Backing boards also serve to protect from dust and dirt, cracks and deformations from handling, and insect activity. Some of the most commonly used types of backing boards include foam core, heritage board, matboard, cardboard/millboard, coroplast, corrugated plastic sheets, acrylic sheeting, mylar, and fabric.\n\nThe frames around paintings are not just for aesthetic appearances. Frames are also used to protect the more sensitive parts of a painting when handled by hand, and they reduce the potential for damage if the painting is dropped. There are also specialists who work on the conservation and restoration of painting frames.\n\nThe movement of objects places an object at a much greater risk of damage than when it is on display or in storage. Certain techniques and equipment are used any time an art work needs to be transported. These techniques and equipment include using padded lifts and dollies; moving small, fragile objects on carts instead of carrying them by hand; lifting objects from underneath by their sturdiest part; and taking extra time and care when on ladders or stairs. In many cases gloves are worn to protect the art work from any dirt or oil that may be on a conservator's or object handler's hands. When handling canvas paintings specifically, never presume that the frame is stable and firmly attached. Do not lift or carry a painting by its stretcher bar, or insert your fingers between the stretcher bar and the canvas.\n\nIt is estimated that a lack of proper routine maintenance and care is responsible for 95 percent of conservation treatments; the remaining 5 percent results from mishandling objects.[7] When developing display and storage methods for works of art, issues regarding relative humidity, temperature, light, pollutants, and pests need to be considered. Location and the types of storage units must be considered as well. Storage areas should be located away from pipes and heating systems, as well as out of areas that are likely to flood or collect dust and dirt. Storage units should be sturdy, adjustable for collections growth so that collections of all sizes are safely stored, made of materials that will not cause any damage to the paintings (e.g. metal racks), and be free of any hardware or supports that stick out.\n\nMoisture, heat, light, pollutants, and pests can slowly or suddenly cause damage to a painting. These agents of deterioration impact all of the components that make up a painting in various ways.\n\nToo low or too high relative humidity (RH), as well as rapid changes in relative humidity, can be damaging to paintings. According to the Canadian Conservation Institute, there are four types of incorrect relative humidity: dampness over 75% RH, RH above or below a critical value for that object, RH above 0%, and RH fluctuations.
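The four categories above lend themselves to a simple screening rule. The following is a minimal sketch, not from the source, of how a series of datalogger RH readings could be checked against them; the function name, the default 47-55% critical band (the museum standard quoted below), and the fluctuation tolerance are all illustrative assumptions.

```python
# Sketch: flag which of the CCI's four types of incorrect relative
# humidity a series of RH readings (percentages) triggers.
def incorrect_rh_types(readings, critical_band=(47.0, 55.0),
                       must_be_dry=False, max_fluctuation=5.0):
    problems = set()
    low, high = critical_band
    for rh in readings:
        if rh > 75.0:                      # type 1: dampness over 75% RH
            problems.add("dampness (>75% RH)")
        if not low <= rh <= high:          # type 2: outside the critical band
            problems.add("RH above or below critical value")
        if must_be_dry and rh > 0.0:       # type 3: RH above 0%
            problems.add("RH above 0% for an object kept dry")
    if readings and max(readings) - min(readings) > max_fluctuation:
        problems.add("RH fluctuations")    # type 4: fluctuations
    return sorted(problems)

# Example: a damp spike plus drift outside the assumed 47-55% band.
print(incorrect_rh_types([50, 53, 62, 78, 49]))
```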
\"Generally accepted temperature and relative humidity standards for most museum objects and artifacts are 65\u00b0-70\u00b0 F (18\u00b0-21\u00b0 C) at 47%-55% RH.\" The best method of controlling the environment is by using a centralized climate control or HVAC system where incoming air is washed, cleaned, heated, or cooled, adjusted to specific conditions, and then injected into the storage space. An appropriate alternative is a localized climate control system where air conditioners cool the air and absorb some of its moisture while filtering out gross particles. They do not condition the air, nor do they filter air pollutants.\n\nBoth visible and ultraviolet light can cause damage to paintings. In particular, organic materials such as paper, fabric, wood, leather, and colored surfaces. \"Fugitive dyes and colorants used in paints will eventually discolor under exposure to ultraviolet light. The fading of pigments and dyes in paintings will affect the color balance of the image.\" Damage from natural and artificial light exposure can be mitigated by displaying paintings out of direct sunlight, use of blinds, shades, curtains, or shudders, filters on nearby windows, installing dimmers and appropriate wattage light bulbs, and displaying paintings a safe distance from a light source to limit heat exposure.\n\nPollutants can be described as gasses, aerosols, liquids, or solids that have a chemical reaction with any part of a painting. There are three types of pollutants. Airborne pollutants, pollutants transferred by contact, and intrinsic pollutants.\n\nAirborne pollutants which originate from atmospheric sources (ozone, hydrogen sulfide, sulfur dioxide, soot, salts), or emissive products, objects, and people (sulfur-based gases, organic acids, lint, and dander). Their effects can include acidification of papers, corrosion of metals, discoloration of colorants, and efflorescence of calcium-based objects.\n\nPollutants transferred by contact include plasticizer from PVC, sulfur compounds from natural rubber, staining materials from wood, viscous compounds from old polyurethane foams, fatty acids from people or from greasy objects, and impregnation of residue of cleaning agents. The effects of these pollutants can include discoloration or corrosion of a paintings surface.\n\nIntrinsic pollutants are composite objects that have compounds that are harmful to other parts of an object. The effects of these pollutants includes deterioration of the object, acidification, discoloration or staining on an object, speed up degradation processes caused by oxygen, water vapor, or other pollutants.\n\nPests such as rodents and insects have the potential to cause considerable damage to works of art. Preventive measures that may be taken to protect paintings from pests include upgrading building structures to obstruct pest entry, installing better cabinetry with good seals, better control of temperature and humidity in collections and storage areas, keeping food and other organic materials from collection areas, and treatment of outbreaks. Materials that are commonly damaged by pests include: natural fibers, wood, paper, starch adhesives, and egg tempera.", "doc_id": "8e77b4ca-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/National_Autonomous_University_of_Mexico", "document": "The National Autonomous University of Mexico (Spanish: Universidad Nacional Aut\u00f3noma de M\u00e9xico, UNAM) is a public research university in Mexico. 
It is consistently ranked as one of the best universities in Latin America, where it is also the largest by enrollment. A portion of UNAM's main campus in Mexico City, known as Ciudad Universitaria (University City), is a UNESCO World Heritage site that was designed by some of Mexico's best-known architects of the 20th century and hosted the 1968 Summer Olympic Games. Murals on the main campus were painted by some of the most recognized artists in Mexican history, such as Diego Rivera and David Alfaro Siqueiros. With acceptance rates usually below 10%, UNAM is also known for its competitive admission process. All Mexican Nobel laureates are either alumni or faculty of UNAM.\n\nUNAM was founded, in its modern form, on 22 September 1910 by Justo Sierra as a secular alternative to its predecessor, the Royal and Pontifical University of Mexico (the first Western-style university in North America, founded in 1551). UNAM obtained administrative autonomy from the government in 1929. This has given the university the freedom to define its own curriculum and manage its own budget without government interference. This has had a profound effect on academic life at the university, which some claim boosts academic freedom and independence. UNAM was also the birthplace of the student movement of 1968.\n\n\"Ciudad Universitaria\" (University City) is UNAM's main campus, located within the Coyoac\u00e1n borough in the southern part of Mexico City. The construction of UNAM's central campus was the original idea of two students from the National School of Architecture in 1928: Mauricio De Maria y Campos and Marcial Guti\u00e9rrez Camarena. It was designed by architects Mario Pani, Armando Franco Rovira, Enrique del Moral, Eugenio Peschard, Ernesto G\u00f3mez Gallardo Arg\u00fcelles, Domingo Garc\u00eda Ramos, and others such as Mauricio De Maria y Campos, who always showed great interest in participating in the project. Architects De Maria y Campos, Del Moral, and Pani, as directors and coordinators, were given the responsibility of assigning each architect to a selected building; the constructions enclose the Estadio Ol\u00edmpico Universitario, about 40 schools and institutes, the Cultural Center, an ecological reserve, the Central Library, and a few museums. It was built during the 1950s on an ancient solidified lava bed to replace the scattered buildings in downtown Mexico City, where classes were given. It was completed in 1954, and is almost a separate region within Mexico City, with its own regulations, councils, and police (to some extent), in a more fundamental way than most universities around the world.\n\nApart from University City (Ciudad Universitaria), UNAM has several campuses in the Metropolitan Area of Mexico City (Acatl\u00e1n, Arag\u00f3n, Cuautitl\u00e1n, Iztacala, and Zaragoza), as well as many others in several locations across Mexico (in Santiago de Quer\u00e9taro, Morelia, M\u00e9rida, Sisal, Ensenada, Cuernavaca, Temixco and Leon), mainly aimed at research and graduate studies. Its School of Music, formerly the National School of Music, is located in Coyoac\u00e1n.
Its Center of Teaching for Foreigners has a campus in Taxco, in the southern Mexican state of Guerrero, focusing on Spanish language and Mexican culture for foreigners, as well as locations in the upscale neighborhood of Polanco in central Mexico City.\n\nThe university has extension schools in the United States and Canada, focusing on the Spanish language, English language, Mexican culture, and, in the case of UNAM Canada, French language: UNAM San Antonio, Texas; UNAM Los Angeles, California; UNAM Chicago, Illinois; Gatineau, Quebec; and Seattle, Washington.\n\nIt operates Centers for Mexican Studies and/or Centers of Teaching for Foreigners in Beijing, China (jointly with the Beijing Foreign Studies University); Madrid, Spain (jointly with the Cervantes Institute); San Jose, Costa Rica (jointly with the University of Costa Rica); London, United Kingdom (with King's College London); Paris, France (jointly with Paris-Sorbonne University); and Northridge, California, United States (jointly with California State University Northridge).\n\nUNAM is organized in schools or colleges, rather than departments. Both undergraduate and graduate studies are available. UNAM is also responsible for the Escuela Nacional Preparatoria (ENP) (National Preparatory School) and the Colegio de Ciencias y Humanidades (CCH) (Science and Humanities College), which consist of several high schools in Mexico City. Counting ENP, CCH, FES (Facultad de Estudios Superiores), higher-secondary, undergraduate, and graduate students, UNAM has over 324,413 students, making it one of the world's largest universities.\n\nUNAM has excelled in many areas of research. The university houses many of Mexico's premier research institutions. In recent years, it has attracted students and hired professional scientists from all over the world, most notably from Russia, India, and the United States, creating a unique and diverse scientific community.\n\nScientific research at UNAM is divided among faculties, institutes, centers, and schools, and covers a range of disciplines in Latin America. Some notable UNAM institutes include the Institute of Astronomy, the Institute of Biotechnology, the Institute of Nuclear Sciences, the Institute of Ecology, the Institute of Physics, the Institute of Renewable Energies, the Institute of Cell Physiology, the Institute of Geophysics, the Institute of Engineering, the Institute of Materials Research, the Institute of Chemistry, the Institute of Biomedical Sciences, and the Applied Mathematics and Systems Research Institute.\n\nResearch centers tend to focus on multidisciplinary problems particularly relevant to Mexico and the developing world, most notably the Center for Applied Sciences and Technological Development, which focuses on connecting the sciences to real-world problems (e.g., optics, nanosciences), and the Center for Energy Research, which conducts world-class research in alternative energies.", "doc_id": "8e77b5e2-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Alternative_minimum_tax", "document": "The alternative minimum tax (AMT) is a tax imposed by the United States federal government in addition to the regular income tax for certain individuals, estates, and trusts.
As of tax year 2018, the AMT raises about $5.2 billion, or 0.4% of all federal income tax revenue, affecting 0.1% of taxpayers, mostly in the upper income ranges.\n\nAn alternative minimum taxable income (AMTI) is calculated by taking the ordinary income and adding disallowed items and credits such as state and local tax deductions, interest on private-activity municipal bonds, the bargain element of incentive stock options, foreign tax credits, and home equity loan interest deductions. This broadens the base of taxable items. Many deductions, such as mortgage home loan interest and charitable deductions, are still allowed under AMT. The AMT is then imposed on this AMTI at a rate of 26% or 28%, with a much higher exemption than the regular income tax.\n\nThe Tax Cuts and Jobs Act of 2017 (TCJA) reduced the fraction of taxpayers who owed the AMT from 3% in 2017 to 0.1% in 2018, including from 27% to 0.4% of those earning $200,000 to $500,000, and from 61.9% to 2% of those earning between $500,000 and $1,000,000.\n\nThe major reasons for the reduction of AMT taxpayers after TCJA include the capping of the state and local tax deduction (SALT) by the TCJA at $10,000, and a large increase in the exemption amount and phaseout threshold. A married couple earning $200,000 now requires over $50,000 of AMT adjustments to begin paying the AMT. The AMT previously applied in 2017 and earlier to many taxpayers earning from $200,000 to $500,000 because state and local taxes were fully deductible under the regular tax code but not at all under AMT. Despite the cap of the SALT deduction, the vast majority of AMT taxpayers paid less under the 2018 rules.\n\nThe AMT was originally designed to tax high-income taxpayers who used the regular tax system to pay little or no tax. Due to inflation and cuts in ordinary tax rates, many middle-income taxpayers began to pay the AMT. The number of households owing AMT rose from 200,000 in 1982 to 5.2 million in 2017, but was reduced back to 200,000 in 2018 by the TCJA. After the expiry of the TCJA in 2025, the number of AMT taxpayers is expected to rise to 7 million in 2026.\n\nA predecessor \"minimum tax\" was enacted by the Tax Reform Act of 1969 and went into effect in 1970. Treasury Secretary Joseph Barr prompted the enactment with an announcement that 155 high-income households had not paid a dime of federal income taxes. The households had taken advantage of so many tax benefits and deductions that they had reduced their tax liabilities to zero. Congress responded by creating an add-on tax on high-income households, equal to 10% of the sum of tax preferences in excess of $30,000 plus the taxpayer's regular tax liability.\n\nThe AMT has undergone several changes since 1969. The most significant of those, according to the Joint Committee on Taxation, occurred under the Reagan-era Tax Equity and Fiscal Responsibility Act of 1982. The law changed the AMT from an add-on tax to its current form: a parallel tax system. The current structure of the AMT reflects changes that were made by the 1982 law. However, participation and revenues from the AMT temporarily plummeted after the 1986 changes. Congress made other notable, but less significant, changes to the law in 1978, 1982, and 1986.\n\nFurther significant changes occurred as a result of the Omnibus Budget Reconciliation Acts of 1990 and 1993, which raised the AMT rate to 24% from the prior level of 21% and then to 26% and 28% for individual filers with incomes that exceeded $175,000.
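To make the parallel-tax mechanics above concrete, here is a minimal sketch of the calculation: add back disallowed items to get AMTI, subtract an exemption that phases out above a threshold, apply the 26%/28% rates, and owe AMT only to the extent the result exceeds the regular tax. The parameter values are assumptions in the spirit of the post-TCJA married-filing-jointly figures, not numbers taken from this article.

```python
# Sketch of the AMT computation described above (illustrative parameters).
EXEMPTION = 109_400          # assumed MFJ exemption
PHASEOUT_START = 1_000_000   # assumed phaseout threshold
BRACKET_28 = 191_500         # assumed point where the 28% rate begins

def tentative_minimum_tax(amti):
    # The exemption phases out at 25 cents per dollar of AMTI above the threshold.
    exemption = max(0.0, EXEMPTION - 0.25 * max(0.0, amti - PHASEOUT_START))
    base = max(0.0, amti - exemption)
    if base <= BRACKET_28:
        return 0.26 * base
    return 0.26 * BRACKET_28 + 0.28 * (base - BRACKET_28)

def amt_owed(amti, regular_tax):
    # AMT is owed only to the extent the tentative tax exceeds the regular tax.
    return max(0.0, tentative_minimum_tax(amti) - regular_tax)

# A couple earning around $200,000 with modest add-backs owes no AMT;
# large add-backs (e.g. exercised incentive stock options) can trigger it.
print(amt_owed(amti=210_000, regular_tax=30_000))   # 0.0
print(amt_owed(amti=400_000, regular_tax=60_000))   # positive, so AMT is due
```

Under these assumed parameters the sketch also illustrates the claim above that a married couple earning $200,000 needs tens of thousands of dollars of adjustments before any AMT is due.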
Now, some taxpayers who do not have very high incomes or participate in numerous special tax benefits and/or activities will pay the AMT.\n\nFor years afterward, Congress passed one-year \"patches\" aimed at minimizing the impact of the tax. While not automatically indexed for inflation until a change in the law in early 2013, the exemption had been increased by Congress many times. In addition, the tax rate was increased for individuals effective 1991 and 1993, and the tax was limited for capital gains and qualifying dividends in 2003.\n\nFor the 2007 tax year, the patch was passed on December 20, 2007, but only after the IRS had already designed its forms for 2007. The IRS had to reprogram its forms to accommodate the law change.", "doc_id": "8e77b6b4-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Supernatural_(season_11)", "document": "The eleventh season of Supernatural, an American dark fantasy television series created by Eric Kripke, premiered on The CW on October 7, 2015, and concluded on May 25, 2016. The season consisted of 23 episodes and aired on Wednesdays at 9:00 pm (ET). This is the fourth and final season with Jeremy Carver as showrunner. The season was released on DVD and Blu-ray in region 1 on September 6, 2016. The eleventh season had an average viewership of 1.78 million U.S. viewers. The season follows Sam and Dean who, after killing Death with his own scythe to get rid of the Mark of Cain, release an all-powerful creature called The Darkness, who threatens to destroy everything in existence.\n\nIn the aftermath of the season 10 finale, with the Darkness released, Sam and Dean meet a young sheriff's deputy named Jenna who claims people have been going rabid and killing each other. A man whose wife has died has a newborn baby with him, but he is infected. He insists on giving the baby to Jenna and the Winchesters, naming her Amara before he dies. Dean is haunted by a vision of The Darkness telling him he set her free and that they are now linked, bound always to help each other. Dean wants to kill the infected and escape with the baby, but Sam wants to try to cure them. Sam acts as a diversion, allowing Dean and Jenna to escape, though Sam gets infected. Castiel struggles to keep himself from inflicting more violence due to Rowena's attack dog spell, and is captured by two angels when he begs for help. Crowley regroups after narrowly escaping Castiel's attempt on his life, hearing that The Darkness has set off ancient alarms in both Heaven and Hell. As Jenna is changing Amara's diaper, she sees the Mark of Cain on her left shoulder.\n\nDean takes Jenna to her grandmother's home and leaves to help Sam. However, Amara begins levitating her toy blocks, and Jenna's religious grandmother calls an exorcist while Jenna calls Dean. Dean arrives to find the \"exorcist\" to be Crowley, who explains that he senses an ancient darkness in the child. At the same time, Jenna visits Amara and then suddenly murders her grandmother. Investigating the noise, Dean finds the Mark of Cain on Amara and, remembering it on the woman from his vision, realizes that Amara is the Darkness. Confronting Jenna, Crowley realizes that she is now soulless, Amara having consumed her soul. Dean and Jenna fight before Crowley kills her. Crowley reveals he intends to use Amara for his own purposes, but Dean incapacitates him only to find Amara, now grown into a young girl, gone. Crowley later approaches Amara with people for her to feed on.
At the same time, two angels torture Castiel before Hannah saves him. When Hannah asks for the location of the Winchesters, Castiel realizes the rescue was a ruse to get information from him. The two angels try to hack his mind, but Rowena's spell gives him the strength to break free and fight back. Castiel kills the two angels, but not before they kill Hannah. An infected Sam works to find a cure for the Darkness' poisoning, with little luck. He encounters a Reaper who informs him that he and Dean will be thrown into a void when they die, and that he is \"unclean in a biblical sense.\" As a result, Sam researches purifications from the Bible and finds a reference to holy oil. Using holy fire, Sam is able to cure himself and then save the remaining people in the town. Returning to the bunker, he and Dean find Castiel begging for help.\n\nSam and Dean search for Rowena to cure Castiel, while also trying to find Metatron, for information on The Darkness, before the other angels do. A demon tries to kill Rowena while she is attempting and failing to recruit witches for her Mega Coven, leading the Winchesters to her. Though they take the Codex from her, she has hidden The Book of the Damned. Castiel goes rogue as the attack dog spell takes over him. While the brothers search for Castiel, Rowena reveals the deal Sam made with her to kill Crowley if she removed the Mark of Cain from Dean, though Dean understands. Dean saves a woman from Castiel and is attacked by the angel. Rowena restores Castiel to normal and escapes. A low-level demon and an angel commiserate at a bar that the leaders of Heaven and Hell don't appear to be doing anything about The Darkness. Meanwhile, Crowley is raising Amara, and even he is unnerved by her power. Amara doesn't seem interested in a world of pure evil, but she feeds on enough demons to grow into a teenager and demands Crowley bring her more.\n\nSupernatural was renewed by The CW for an eleventh season on January 11, 2015. Jensen Ackles directed the first-produced episode of the season, titled \"The Bad Seed\", which was the third episode to air. Emily Swallow was cast in a recurring role in July 2015, portraying Amara, a femme fatale. The season features a bottle episode, titled \"Baby\", which takes place entirely inside the Impala. Richard Speight Jr., who has a recurring role on the series as the Archangel Gabriel, directed the eighth episode of the season. In May 2016, it was announced that Jeremy Carver would be leaving the series, and that Robert Singer and Andrew Dabb would take over the role of showrunner for the twelfth season.\n\nThe review aggregator website Rotten Tomatoes gives the 11th season a 90% approval rating based on 10 reviews, with an average score of 7.46/10. The critics consensus reads, \"It may not rewrite the Supernatural playbook, but by introducing an enthralling new threat, this season becomes another high-stakes outing for the Winchesters.\"", "doc_id": "8e77b79a-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/IBM_System/360", "document": "The IBM System/360 (S/360) is a family of mainframe computer systems that was announced by IBM on April 7, 1964, and delivered between 1965 and 1978. It was the first family of computers designed to cover both commercial and scientific applications and to cover a complete range of applications from small to large. The design distinguished between architecture and implementation, allowing IBM to release a suite of compatible designs at different prices.
All but the only partially compatible Model 44 and the most expensive systems use microcode to implement the instruction set, which features 8-bit byte addressing and fixed-point binary, fixed-point decimal, and hexadecimal floating-point calculations.\n\nThe System/360 family introduced IBM's Solid Logic Technology (SLT), which packed more transistors onto a circuit card, allowing more powerful but smaller computers to be built.\n\nThe slowest System/360 model announced in 1964, the Model 30, could perform up to 34,500 instructions per second, with memory from 8 to 64 KB. High-performance models came later. The 1967 IBM System/360 Model 91 could execute up to 16.6 million instructions per second. The larger 360 models could have up to 8 MB of main memory, though that much main memory was unusual\u2014a large installation might have as little as 256 KB of main storage, but 512 KB, 768 KB or 1024 KB was more common. Up to 8 megabytes of slower (8 microsecond) Large Capacity Storage (LCS) was also available for some models.\n\nThe IBM 360 was extremely successful in the market, allowing customers to purchase a smaller system with the knowledge they would be able to move to larger ones if their needs grew, without reprogramming application software or replacing peripheral devices. Its design influenced computer design for years to come; many consider it one of the most successful computers in history.\n\nThe chief architect of System/360 was Gene Amdahl, and the project was managed by Fred Brooks, responsible to Chairman Thomas J. Watson Jr. The commercial release was piloted by another of Watson's lieutenants, John R. Opel, who managed the launch in 1964.\n\nApplication-level compatibility (with some restrictions) for System/360 software is maintained to the present day with the System z mainframe servers.\n\nBinary arithmetic and logical operations are performed as register-to-register and as memory-to-register/register-to-memory as a standard feature. If the Commercial Instruction Set option was installed, packed decimal arithmetic could be performed as memory-to-memory with some memory-to-register operations. The Scientific Instruction Set feature, if installed, provided access to four floating-point registers that could be programmed for either 32-bit or 64-bit floating-point operations. The Models 85 and 195 could also operate on 128-bit extended-precision floating-point numbers stored in pairs of floating-point registers, and software provided emulation in other models. The System/360 used an 8-bit byte, 32-bit word, 64-bit double-word, and 4-bit nibble. Machine instructions had an operation code and operands, which could contain register numbers or memory addresses. This complex combination of instruction options resulted in a variety of instruction lengths and formats.\n\nMemory addressing was accomplished using a base-plus-displacement scheme, with registers 1 through F (15).
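As a concrete illustration of the base-plus-displacement scheme (a sketch under assumptions, not IBM documentation), the effective-address rule that this and the following paragraphs describe can be written out as follows; the 12-bit displacement limit and the special treatment of register 0 are detailed next.

```python
# Sketch: S/360-style effective-address calculation.
def effective_address(regs, base, displacement, index=0):
    # regs: the 16 general registers; naming register 0 as a base or
    # index register means "none", so zero is used in place of its contents.
    assert 0 <= displacement <= 0xFFF, "displacement is a 12-bit field"
    b = regs[base] if base != 0 else 0
    x = regs[index] if index != 0 else 0
    return (b + x + displacement) & 0xFFFFFF  # S/360 addresses are 24 bits

regs = [0] * 16
regs[12] = 0x004000                # a typical base-register setting
print(hex(effective_address(regs, base=12, displacement=0x123)))  # 0x4123
```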
A displacement was encoded in 12 bits, thus allowing a 4096-byte displacement (0\u20134095), as the offset from the address put in a base register.\n\nRegister 0 could not be used as a base register nor as an index register (nor as a branch address register), as \"0\" was reserved to indicate an address in the first 4 KB of memory, that is, if register 0 was specified as described, the value 0x00000000 was implicitly input to the effective address calculation in place of whatever value might be contained within register 0 (or if specified as a branch address register, then no branch was taken, and the content of register 0 was ignored, but any side effect of the instruction was performed).\n\nThis specific behavior permitted the initial execution of interrupt routines, since base registers would not necessarily be set to 0 during the first few instruction cycles of an interrupt routine. It is not needed for IPL (\"Initial Program Load\" or boot), as one can always clear a register without the need to save it.", "doc_id": "8e77b89e-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Santino_Marella", "document": "Anthony Carelli (born March 14, 1974) is a Canadian judoka and semi-retired professional wrestler. He is known for his 11-year tenure with WWE, where he wrestled under the ring name Santino Marella. He is the founder of and instructor at Battle Arts Academy, a martial arts and professional wrestling training facility in Mississauga, Ontario, and the official ambassador of Judo Canada.\n\nCarelli was signed by World Wrestling Entertainment in 2005, being assigned to Ohio Valley Wrestling, WWE's farm territory. He made his debut on Raw during a live episode from Milan, Italy. Under the character of Santino Marella, presented as a fan selected from the audience, he defeated the Intercontinental Champion Umaga, winning the title in his debut match. During the following years, he would win the Intercontinental title one more time, the United States Championship, and the WWE Tag Team Championship. He was also involved in a storyline where he worked as Santina Marella, Santino's twin sister. Carelli retired in 2014 and left WWE the following year. After his release, he opened Battle Arts Academy and worked for Impact Wrestling. In 2017, he wrestled sporadic matches on the independent circuit, as well as making occasional appearances in WWE.\n\nAnthony Carelli is known for his humorous gimmick as Santino Marella, an Italian stereotype, often being involved in comedic segments, having several on-screen relationships with fellow wrestlers, as well as being crowned \"Miss WrestleMania\" at WrestleMania XXV disguised as \"Santina Marella\". His character won Carelli the Wrestling Observer Newsletter's award for Best Gimmick in 2007 and 2008.\n\nAnthony Carelli was born in Mississauga, Ontario, to a family of Italian and M\u00e9tis descent. He attended St Basil's Catholic Elementary School, and later Philip Pocock Catholic Secondary School and Concordia University. Carelli began training in judo at the age of nine and also competed in high school wrestling, winning the Region of Peel Secondary School Athletic Association (ROPSSAA) Tournament back-to-back as a junior and a senior.\n\nAnthony Carelli debuted as Santino Marella (a homage to WWE Hall of Famer Robert \"Gorilla Monsoon\" Marella) on the April 16, 2007 episode of Raw from Milan, Italy. He was presented as a fan that Vince McMahon selected as an opponent for Umaga.
The unknown Marella scored a surprising upset and won the WWE Intercontinental Championship with an assist from Bobby Lashley. The next day, WWE.com posted a profile on Marella, saying he was an Italian national who moved to Canada as a child and returned to his native country a few times each year to visit family. The profile claimed that he moved to the U.S. to pursue a wrestling career with WWE. Shortly after winning the Intercontinental title, Marella feuded with Chris Masters, narrowly retaining the title in his first defenses. After defeating Umaga by disqualification at Vengeance: Night of Champions on June 24, he dropped the title back to him on July 2.\n\nMarella then began a gradual heel turn and became increasingly jealous of his girlfriend, Maria. Over several weeks, they appeared together in a series of segments on Raw, including two \"game show\" skits, hosted by General Manager William Regal, which resulted in retired wrestler Ron Simmons winning a date with Maria, to Santino's dismay. Marella began a publicity campaign against the WWE Films production, The Condemned, as its DVD release neared. He was eventually confronted by the film's star, Steve Austin, who argued the film's merits before delivering a Stone Cold Stunner to Marella and hosing him and Maria with beer. During the Austin angle, Marella repeatedly mocked him and his catchphrases in humorously broken English, starting a new comedic trend in his gimmick. After a short angle with Jerry Lawler, which included Lawler hitting his signature fist drop maneuver after Marella lost a match to the returning Chris Jericho on the November 26 episode of Raw, Marella formed a tag team with Carlito.\n\nAt WrestleMania XXVI, Santino Marella competed in the 26-man battle royal dark match, won by Yoshi Tatsu. He then tried to form a tag team with Vladimir Kozlov, who repeatedly refused the offer. On the May 31 episode of Raw, Kozlov interfered in Marella's match, helping him win. On the July 19 Raw, they finally teamed to defeat William Regal and Zack Ryder. At Night of Champions, Marella and Kozlov wrestled a Tag Team Turmoil match for the Unified Tag Team Championship, won by Cody Rhodes and Drew McIntyre.\n\nOn the October 11 Raw, he defeated Zack Ryder to qualify for Team Raw at Bragging Rights, against Team SmackDown. He was the first of seven Team Raw members eliminated, pinned by Tyler Reks.[49] On the October 25 Raw, after Sheamus had called him an \"embarrassment\" for being the first man eliminated at Bragging Rights, Marella scored an upset victory over the former two-time WWE Champion. They wrestled twice more, both matches ending with John Morrison saving Marella from Sheamus' post-match assault.", "doc_id": "8e77b984-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Craven_Cottage", "document": "Craven Cottage is a football ground in Fulham, West London, England, which has been the home of Fulham F.C. since 1896. The ground's capacity is 22,384; the record attendance is 49,335, for a game against Millwall in 1938. Next to Bishop's Park on the banks of the River Thames, it was originally a royal hunting lodge and has a history dating back over 300 years.\n\nThe stadium has also been used by the United States, Australia, Ireland, and Canada men's national football teams, and was formerly the home ground for rugby league club Fulham RLFC.\n\nThe original Cottage was built in 1780 by William Craven, the sixth Baron Craven, and was located close to where the Johnny Haynes Stand is now.
At the time, the surrounding areas were woods which made up part of Anne Boleyn's hunting grounds.\n\nThe Cottage was lived in by Edward Bulwer-Lytton (who wrote The Last Days of Pompeii) and other somewhat notable (and moneyed) persons until it was destroyed by fire in May 1888. Many rumours persist among Fulham fans of past tenants of Craven Cottage. Sir Arthur Conan Doyle, Jeremy Bentham, Florence Nightingale and even Queen Victoria are reputed to have stayed there, although there is no real evidence for this. Following the fire, the site was abandoned. Fulham had had 8 previous grounds before settling in at Craven Cottage for good. Therefore, The Cottagers have had 12 grounds overall (including a temporary stay at Loftus Road), meaning that only their former 'landlords' and rivals QPR have had more home grounds in British football. Of particular note was Ranelagh House, Fulham's palatial home from 1886 to 1888.\n\nAn England v Wales match was played at the ground in 1907, followed by a rugby league international between England and Australia in 1911.\n\nOne of the club's directors, Henry Norris, and his friend William Hall took over Arsenal in the early 1910s, the plan being to merge Fulham with Arsenal, to form a \"London superclub\" at Craven Cottage. This move was largely motivated by Fulham's failure thus far to gain promotion to the top division of English football. There were also plans for Henry Norris to build a larger stadium on the other side of Stevenage Road, but there was little need after the merger idea failed. During this era, the Cottage was used for choir singing and marching bands along with other performances, and Mass.\n\nIn 1933 there were plans to demolish the ground and start again from scratch with a new 80,000 capacity stadium. These plans never materialised, mainly due to the Great Depression.\n\nOn 8 October 1938, 49,335 spectators watched Fulham play Millwall. It was the largest attendance ever at Craven Cottage and the record remains today, unlikely to be bettered as it is now an all-seater stadium with currently no room for more than 25,700. The ground hosted several football games for the 1948 Summer Olympics, and is one of the last extant grounds to have done so.\n\nIt was not until after Fulham first reached the top division, in 1949, that further improvements were made to the stadium. In 1962 Fulham became the final side in the first division to erect floodlights. The floodlights were said to be the most expensive in Europe at the time, as they were so modern. The lights were like large pylons towering 50 metres over the ground and were similar in appearance to those at the WACA. An electronic scoreboard was installed on the Riverside Terrace at the same time as the floodlights, and flagpoles flying the flags of all of the other first division teams were erected. Following the sale of Alan Mullery to Tottenham Hotspur in 1964 (for \u00a372,500), the Hammersmith End had a roof put over it at a cost of approximately \u00a342,500.\n\nAlthough Fulham were relegated, the development of Craven Cottage continued. The Riverside terracing, infamous for the fact that fans occupying it would turn their heads annually to watch The Boat Race pass, was replaced by what was officially named the 'Eric Miller Stand', Eric Miller being a director of the club at the time. The stand, which cost \u00a3334,000 and held 4,200 seats, was opened with a friendly game against Benfica in February 1972 (a match which included Eus\u00e9bio).
Pel\u00e9 was also to appear at the ground, in a friendly played against his team Santos F.C. The Miller stand brought the seated capacity up to 11,000 out of a total of 40,000. Eric Miller committed suicide five years later, after a political and financial scandal; he had been involved in shady dealings, including attempts to move Fulham away from the Cottage. The stand is now better known as the Riverside Stand.", "doc_id": "8e77ba56-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Learning_styles", "document": "Learning styles refer to a range of theories that aim to account for differences in individuals' learning. Although there is ample evidence that individuals express personal preferences for how they receive information, few studies have found any validity in using learning styles in education. Many theories share the proposition that humans can be classified according to their \"style\" of learning, but differ in how the proposed styles should be defined, categorized and assessed. A common concept is that individuals differ in how they learn.\n\nThe idea of individualized learning styles became popular in the 1970s, and has greatly influenced education despite the criticism that the idea has received from some researchers. Proponents recommend that teachers run a needs analysis to assess the learning styles of their students and adapt their classroom methods to best fit each student's learning style. Critics say there is no consistent evidence that identifying an individual student's learning style and teaching for specific learning styles produces better student outcomes. Since 2012, learning styles have often been referred to as a \"neuromyth\" in education. There is evidence of empirical and pedagogical problems related to forcing learning tasks to \"correspond to differences in a one-to-one fashion\". Studies contradict the widespread \"meshing hypothesis\" that a student will learn best if taught in a method deemed appropriate for the student's learning style. However, a 2020 systematic review suggested that a majority (89%) of educators around the world continue to believe that the meshing hypothesis is correct.\n\nStudies further show that teachers cannot assess the learning style of their students accurately. In one study, students were asked to take an inventory on their learning style. After nearly 400 students completed the inventory, 70% did not use study habits that matched their preferred learning method. Another piece of this study indicated that those students who used study methods that did match their preferred learning style did not perform any better on tests.\n\nDavid A. Kolb's model is based on his experiential learning model, as explained in his book Experiential Learning. Kolb's model outlines two related approaches toward grasping experience: Concrete Experience and Abstract Conceptualization, as well as two related approaches toward transforming experience: Reflective Observation and Active Experimentation. According to Kolb's model, the ideal learning process engages all four of these modes in response to situational demands; they form a learning cycle from experience to observation to conceptualization to experimentation and back to experience. In order for learning to be effective, Kolb postulated, all four of these approaches must be incorporated.\n\nKolb's model gave rise to the Learning Style Inventory, an assessment method used to determine an individual's learning style.
According to this model, individuals may exhibit a preference for one of the four styles: Accommodating, Converging, Diverging and Assimilating\u2014depending on their approach to learning in Kolb's experiential learning model.\n\nAlthough Kolb's model is widely used, a 2013 study pointed out that Kolb's Learning Style Inventory, among its other weaknesses, incorrectly dichotomizes individuals on the abstract/concrete and reflective/action dimensions of experiential learning (in much the same way as the Myers-Briggs Type Indicator does in a different context), and proposed instead that these dimensions be treated as continuous rather than dichotomous/binary variables.\n\nPeter Honey and Alan Mumford adapted Kolb's experiential learning model. First, they renamed the stages in the learning cycle to accord with managerial experiences: having an experience, reviewing the experience, concluding from the experience, and planning the next steps. Second, they aligned these stages to four learning styles named: Activist, Reflector, Theorist and Pragmatist.\n\nThese four learning styles are assumed to be acquired preferences that are adaptable, either at will or through changed circumstances, rather than being fixed personality characteristics. Honey and Mumford's Learning Styles Questionnaire (LSQ) is a self-development tool and differs from Kolb's Learning Style Inventory by inviting managers to complete a checklist of work-related behaviours without directly asking managers how they learn. Having completed the self-assessment, managers are encouraged to focus on strengthening underutilized styles in order to become better equipped to learn from a wide range of everyday experiences.", "doc_id": "8e77bb64-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/European_Union_Association_Agreement", "document": "A European Union Association Agreement or simply Association Agreement (AA) is a treaty between the European Union (EU), its Member States and a non-EU country that creates a framework for co-operation between them. Areas frequently covered by such agreements include the development of political, trade, social, cultural and security links.\n\nAssociation Agreements are broad framework agreements between the EU (or its predecessors) and its member states, and an external state which governs their bilateral relations. The provision for an association agreement was included in the Treaty of Rome, which established the European Economic Community, as a means to enable co-operation of the Community with the United Kingdom, which had retreated from the treaty negotiations at the Messina Conference of 1955. According to the European External Action Service, for an agreement to be classified as an AA, it must meet several criteria:\n\n1. The legal basis for [association agreements'] conclusion is Article 217 TFEU (former art. 310 and art. 238 TEC)\n2. Intention to establish close economic and political cooperation (more than simple cooperation);\n3. Creation of paritary bodies for the management of the cooperation, competent to take decisions that bind the contracting parties;\n4. Offering most favoured nation treatment;\n5. Providing for a privileged relationship between the EC and its partner;\n6. Since 1995 the clause on the respect of human rights and democratic principles is systematically included and constitutes an essential element of the agreement;\n7. 
In a large number of cases, the association agreement replaces a cooperation agreement, thereby intensifying the relations between the partners.\n\nThe EU typically concludes Association Agreements in exchange for commitments to political, economic, trade, or human rights reform in a country. In exchange, the country may be offered tariff-free access to some or all EU markets (industrial goods, agricultural products, etc.), and financial or technical assistance. Most recently signed AAs also include a Free Trade Agreement (FTA) between the EU and the third country.\n\nAssociation Agreements have to be accepted by the European Union and need to be ratified by all the EU member states and the state concerned.\n\nAAs go by a variety of names (e.g. Euro-Mediterranean Agreement Establishing an Association, Europe Agreement Establishing an Association) and need not necessarily even have the word \"Association\" in the title. Some AAs contain a promise of future EU membership for the contracting state.\n\nThe first states to sign such agreements were Greece in 1961 and Turkey in 1963.\n\nIn recent history, such agreements have been signed as part of two EU policies, the Stabilisation and Association Process (SAp) and the European Neighbourhood Policy (ENP).\n\nThe countries of the western Balkans (official candidates Albania, Montenegro, North Macedonia, Serbia, and potential candidates Bosnia and Herzegovina and Kosovo) are covered by the SAp. All six have \"Stabilisation and Association Agreements\" (SAA) with the EU in force.\n\nThe Eastern European neighbours of Armenia, Azerbaijan, Belarus, Georgia, Moldova, and Ukraine are all members of the Eastern Partnership and are covered by the ENP. Russia, by contrast, has a special status, with the EU-Russia Common Spaces instead of ENP participation.\n\nMeanwhile, the countries of the Mediterranean (Algeria, Morocco, Egypt, Israel, Jordan, Lebanon, Libya, the Palestinian Authority, Syria, Tunisia) are also covered by the ENP, and seven of the Mediterranean states have a \"Euro-Mediterranean Agreement establishing an Association\" (EMAA) with the EU in force, while Palestine has an interim EMAA in force. Syria initialed an EMAA in 2008; however, signing has been deferred indefinitely. Negotiations for a Framework Agreement with the remaining state, Libya, have been suspended.\n\nMoldova and Ukraine have Association Agreements in force. Armenia completed negotiations for an AA in 2013 but decided not to sign the agreement, and later signed the Comprehensive and Enhanced Partnership Agreement (CEPA) with the EU in 2017. Azerbaijan was also negotiating an AA, but did not conclude one.\n\nBoth the SAA and ENP are based mostly on the EU's acquis communautaire and its promulgation in the co-operating states' legislation. The depth of this harmonisation is less than that for full EU members, and some policy areas may not be covered, depending on the particular state.\n\nIn addition to these two policies, AAs with free-trade agreement provisions have been signed with other states and trade blocs including Chile and South Africa.", "doc_id": "8e77bc5e-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Gandalf", "document": "Gandalf is a protagonist in J. R. R. Tolkien's novels The Hobbit and The Lord of the Rings. He is a wizard, one of the Istari order, and the leader of the Fellowship of the Ring.
Tolkien took the name \"Gandalf\" from the Old Norse \"Catalogue of Dwarves\" (Dvergatal) in the V\u00f6lusp\u00e1.\n\nAs a wizard and the bearer of one of the Three Rings, Gandalf has great power, but works mostly by encouraging and persuading. He sets out as Gandalf the Grey, possessing great knowledge and travelling continually. Gandalf is focused on the mission to counter the Dark Lord Sauron by destroying the One Ring. He is associated with fire; his ring of power is Narya, the Ring of Fire. As such, he delights in fireworks to entertain the hobbits of the Shire, while in great need he uses fire as a weapon. As one of the Maiar, he is an immortal spirit from Valinor, but his physical body can be killed.\n\nIn The Hobbit, Gandalf assists the dwarves and the hobbit Bilbo with their quest to retake the Lonely Mountain from Smaug the dragon, but leaves them to urge the White Council to expel Sauron from his fortress of Dol Guldur. In the quest, Bilbo finds a magical ring. The expulsion succeeds, but in The Lord of the Rings, Gandalf reveals that Sauron's retreat was only a feint, as he soon reappeared in Mordor. Gandalf further explains that, after years of investigation, he is sure that Bilbo's ring is the One Ring that Sauron needs to dominate the whole of Middle-earth. The Council of Elrond creates the Fellowship of the Ring, with Gandalf as its leader, to defeat Sauron by destroying the Ring. He takes them south and through the Misty Mountains, but is killed fighting a Balrog, an evil spirit-being, in the underground realm of Moria. After he dies, he is sent back from Valinor to Middle-earth to complete his mission as Gandalf the White. He reappears in dazzling light to three of the Fellowship and helps to counter the enemy in Rohan, then in Gondor, and finally at the Black Gate of Mordor, in each case largely by offering guidance. When victory is complete, he crowns Aragorn as King before leaving Middle-earth for ever to return to Valinor.\n\nTolkien once described Gandalf as an angel incarnate; later, both he and other scholars have likened Gandalf to the Norse god Odin in his \"Wanderer\" guise. Others have described Gandalf as a guide-figure who assists the protagonist, comparable to the Cumaean Sibyl who assisted Aeneas in Virgil's The Aeneid, or to Virgil himself in Dante's Inferno. Scholars have likened his return in white to the transfiguration of Christ; he is further described as a prophet, representing one element of Christ's threefold office of prophet, priest, and king, where the other two roles are taken by Frodo and Aragorn.\n\nThe Gandalf character has been featured in radio, television, stage, video game, music, and film adaptations, including Ralph Bakshi's 1978 animated film. His best-known portrayal is by Ian McKellen in Peter Jackson's 2001\u20132003 The Lord of the Rings film series, where the actor based his acclaimed performance on Tolkien himself. McKellen reprised the role in Jackson's 2012\u20132014 film series The Hobbit.\n\nTolkien describes Gandalf as the last of the wizards to appear in Middle-earth, one who \"seemed the least, less tall than the others, and in looks more aged, grey-haired and grey-clad, and leaning on a staff\". Yet the Elf C\u00edrdan who met him on arrival nevertheless considered him \"the greatest spirit and the wisest\" and gave him the Elven Ring of Power called Narya, the Ring of Fire, containing a \"red\" stone for his aid and comfort. 
Tolkien explicitly links Gandalf to the element fire later in the same essay: Warm and eager was his spirit (and it was enhanced by the ring Narya), for he was the Enemy of Sauron, opposing the fire that devours and wastes with the fire that kindles, and succours in wanhope and distress; but his joy, and his swift wrath, were veiled in garments grey as ash, so that only those that knew him well glimpsed the flame that was within. Merry he could be, and kindly to the young and simple, yet quick at times to sharp speech and the rebuking of folly; but he was not proud, and sought neither power nor praise ... Mostly he journeyed tirelessly on foot, leaning on a staff, and so he was called among Men of the North Gandalf 'the Elf of the Wand'. For they deemed him (though in error) to be of Elven-kind, since he would at times work wonders among them, loving especially the beauty of fire; and yet such marvels he wrought mostly for mirth and delight, and desired not that any should hold him in awe or take his counsels out of fear. ... Yet it is said that in the ending of the task for which he came he suffered greatly, and was slain, and being sent back from death for a brief while was clothed then in white, and became a radiant flame (yet veiled still save in great need).\n\nThe wizards arrived in Middle-earth separately, early in the Third Age; Gandalf was the last, landing in the Havens of Mithlond. He seemed the oldest and least in stature, but C\u00edrdan the Shipwright felt that he was the greatest on their first meeting in the Havens, and gave him Narya, the Ring of Fire. Saruman, the chief Wizard, learned of the gift and resented it. Gandalf hid the ring well, and it was not widely known until he left with the other ring-bearers at the end of the Third Age that he, and not C\u00edrdan, was the holder of the third of the Elven-rings.\n\nGandalf's relationship with Saruman, the head of their Order, was strained. The Wizards were commanded to aid Men, Elves, and Dwarves, but only through counsel; they were forbidden to use force to dominate them, though Saruman increasingly disregarded this.\n\nGandalf meets with Bilbo in the opening of The Hobbit. He arranges for a tea party, to which he invites the thirteen dwarves, and thus assembles the travelling group central to the narrative. Gandalf contributes the map and key to Erebor to assist the quest. On this quest Gandalf acquires the sword Glamdring from the trolls' treasure hoard. Elrond informs them that the sword was made in Gondolin, a city long ago destroyed, where Elrond's father lived as a child.\n\nAfter escaping from the Misty Mountains pursued by goblins and wargs, the party is carried to safety by the Great Eagles. Gandalf then persuades Beorn to house and provision the company for the trip through Mirkwood. Gandalf leaves the company before they enter Mirkwood, saying that he had pressing business to attend to.\n\nHe turns up again before the walls of Erebor disguised as an old man, revealing himself when it seems the Men of Esgaroth and the Mirkwood Elves will fight Thorin and the dwarves over Smaug's treasure. The Battle of Five Armies ensues when hosts of goblins and wargs attack all three parties. After the battle, Gandalf accompanies Bilbo back to the Shire, revealing at Rivendell what his pressing business had been: Gandalf had once again urged the council to evict Sauron, since quite evidently Sauron did not require the One Ring to continue to attract evil to Mirkwood.
The Council then \"put forth its power\" and drove Sauron from Dol Guldur. Sauron, however, had anticipated this and had feigned his withdrawal, only to reappear in Mordor.", "doc_id": "8e77bdda-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/The_Amazing_Race_19", "document": "The Amazing Race 19 is the nineteenth installment of the American reality television show The Amazing Race. This season featured eleven teams of two competing in a race around the world.\n\nThe 19th season premiered on CBS on September 25, 2011, and the finale aired on December 11, 2011.\n\nEngaged couple Ernie Halvorsen and Cindy Chiang were the winners of this season, while dating couple Jeremy Cline and Sandy Draghi finished in second place, and married couple Marcus Pollard and Amani Pollard finished in third.\n\nThis season traveled a little over 35,000 miles (56,000 km) to 20 cities across four continents, and included first-time visits to Denmark, Indonesia, Malawi, and Belgium. Filming started on June 18, 2011, with teams seen leaving Los Angeles International Airport and heading to Taiwan. The starting line was located at the Hsi Lai Temple in the foothills of Hacienda Heights, California. As with the previous season, racers had a task they had to perform before receiving tickets to their first destination. American film crews were also spotted in Hiller\u00f8d, Denmark.\n\nTwo new game elements were introduced in this season. Leg 1 introduced the Hazard, a penalty that one team incurred for being the last to finish the starting line task. According to Phil Keoghan, the Hazard was added to \"test people's mental strength out of the gate\", and he claimed that it had a \"rolling effect\" throughout the rest of the season. Leg 2 marked the first time that two teams were eliminated at the pit stop.\n\nDuring the first leg, Kaylani Paliotta lost her passport at a gas station while en route from Hsi Lai Temple to Los Angeles International Airport. Kaylani & Lisa returned to the gas station to search for the passport, but could not find it and opted to proceed to the airport hoping that another racer had picked it up. The camera crew accompanying the team had seen the dropped passport, but could not act on it, and instead informed production of the situation. Production prepared to conduct an impromptu elimination and Phil Keoghan rushed to the airport. The passport was found by two bystanders who had previously helped another team at the gas station. After they posted about the incident on Twitter, a fan of the show advised them to take the passport to the airport, and they were able to return it to Kaylani before her scheduled flight.\n\nLeg 5 was supposed to take place in Laos. However, a monsoon caused heavy flooding in the country and forced the production team to construct an additional leg in Thailand instead. Laos would eventually be visited later in The Amazing Race 31.\n\nThe cast included former Survivor winners and dating couple Ethan Zohn (winner of Survivor: Africa) and Jenna Morasca (winner of Survivor: The Amazon); Zac Sunderland, the first person under 18 to sail solo around the world; retired NFL tight end Marcus Pollard and his wife, Amani; and former Olympic snowboarders Andy Finch and Tommy Czeschin.\n\nWinners Ernie & Cindy were married on March 10, 2012. Ron & Bill, Kaylani & Lisa, Liz & Marie, Justin & Jennifer, Laurence & Zac, Amani & Marcus, and Jeremy & Sandy all attended the wedding.
For their honeymoon, the newlyweds traveled to Fiji on the trip they had won on Leg 8.\n\nOne of the competitors, Bill Alden, died on June 10, 2016, at the age of 69 from pancreatic cancer.", "doc_id": "8e77be84-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Assassination_of_Admiral_Coligny", "document": "The assassination of Admiral Coligny on 24 August 1572 was the prelude to one of the critical events of the French Wars of Religion, the Massacre of Saint Bartholomew. The figures responsible for first the attempt on his life on 22 August and then his execution on 24 August have long been debated by historians. Coligny's feud with Henry I, Duke of Guise throughout the 1560s and his desire to bring France into conflict with Spain are often cited as key factors. The attempt on his life took place in the wake of the marriage between Navarre and Margaret of Valois, a high-profile affair intended by Catherine de' Medici and her son Charles IX as a component of the Peace of Saint-Germain-en-Laye.\n\nColigny put little trust in the crown's promises of safety should he return to court, and based himself at La Rochelle from late 1570 into 1571. In 1571 he married Jacqueline de Montbel d'Entremont, giving him a territorial interest in Savoyard lands. Both the duke of Savoy and the king of Spain were convinced he was plotting against them. Enthusiasm for the planned marriage between Navarre and Margaret of Valois, designed to seal the Peace of Saint-Germain, was mixed among the Huguenot leadership. Albret was ambivalent about the prospect, while Coligny was opposed, fearing that Navarre's abjuration could withdraw him from the Bourbon-Ch\u00e2tillon orbit. After long resisting coming to the French court, Albret arrived in March 1572, and the marriage contract was signed on 11 March. As the wedding drew closer, many Huguenot nobles arrived at the capital for the celebrations, not easily able to miss such an important event in cementing the peace, as well as such an elite marriage. The Guise and their associated clientele took up residence in the H\u00f4tel de Guise.\n\nIn September 1571 Coligny arrived at court, then being held at Blois, received a generous pension from the king totalling 150,000 livres, and was readmitted onto the king\u2019s council. Between this re-admission and August 1572, however, Coligny would be present at court for only five weeks, and his influence would be highly limited, despite the fears of militant Catholics that he was driving the king\u2019s policy. During his stay at court, the Guise took leave of the king. In June 1572 Coligny again presented himself at court, accompanied by 300 cavaliers.\n\nAt the climax of the siege of Orl\u00e9ans in 1563, the Duke of Guise had been assassinated by the Protestant Poltrot de M\u00e9r\u00e9. Under torture Poltrot would implicate Coligny in the assassination, though his story would change with each telling, and several times he would deny Coligny's involvement. Coligny, who was fighting in Normandy, denounced these accusations, demanding the right to cross-examine Poltrot in the Parlement to clear his name. Poltrot would however be hurriedly executed to pre-empt the amnesty clause of the Edict of Amboise.\n\nPoltrot's testimony would be a lightning rod for Guise anger. Meanwhile Cond\u00e9 and Montmorency rallied to Coligny's defence at council. The Guise family launched a private suit on 26 April 1563.
To ensure that an appropriately partisan justice was selected to handle their suit, they made a show of force with a hundred armed men at the Parlement session where the decision was being made, and succeeded in getting a partisan candidate. The king would however evoke the case to the royal council, removing it from the Parlement's jurisdiction. This done, he then arranged for the judgement to be suspended until he reached his majority. Attempting to prove that the Guise were not the only ones who could make shows of force, Coligny entered Paris with a large host of armed supporters in November. Catherine summoned both to the Louvre on 6 December in a desperate bid to get the two sides to calm down. This was in vain, and the two sides engaged in various petty acts of violence over the coming weeks, culminating in the murder of a member of the guard. On 5 January the king tried to take more definitive action to crush the feud, suspending judgement for a further three years.\n\nFrustrated at the failure of their strategy, the Guise altered their approach, seeking to build a non-confessional base from which to prosecute their feud, appealing to Cond\u00e9 by highlighting how Montmorency and the Ch\u00e2tillon were upstart houses compared to true princes like them. With Cond\u00e9 on side, Lorraine planned an armed entry into Paris over the protestations of the governor, Marshal Montmorency, who warned them that arms were not allowed in the city. Lorraine and the young Guise entered with a large retinue under arms, clashing with the forces of Montmorency in several street skirmishes in which they came off the worse, with several dead. Humiliated, Lorraine and Guise retreated to their residence, where they were besieged by taunts even from the Catholic Parisians.\n\nIn early 1566 Lorraine travelled to where the court was staying at Moulins to appeal for proceedings against Coligny. Continuing his strategy from the prior year, he characterised himself as a champion of the rights of princes, but the various princes at court were uninterested and voted his proposals down. This allowed Charles to compel Lorraine and Coligny to exchange the kiss of peace. The young Guise would however refuse to appear at Moulins, and further refuse to sign anything that implied Coligny's innocence. Guise had challenged both Coligny and Marshal Montmorency to duels; however, they felt confident in ignoring his challenges. The king followed the staging of the kiss of peace by sending out an edict in which Coligny's innocence was declared on 29 January 1566.\n\nIn November 1571 it was reported that the Guise were gathering funds and followers in Champagne. The Huguenot nobility rallied round Coligny, who was at Ch\u00e2tillon, offering their support if conflict broke out again.\n\nIn January 1572 the Guise petitioned for the withdrawal of the arr\u00eat issued at Moulins on their feud with Coligny. On 14 January Guise, Aumale and Mayenne entered Paris with a strong escort of 500 men in another show of force. In another display of bluster, Guise requested the king's permission to fight Coligny in single combat.
In March Charles again cleared Coligny of involvement in the assassination of the duke of Guise. Satisfied that he had not gone down without an attempt to protect his honour, Guise was persuaded by the king in May to abide by the terms of the Moulins agreement that he had previously avoided, delighting the king.", "doc_id": "8e77bfce-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Butriptyline", "document": "Butriptyline, sold under the brand name Evadyne among others, is a tricyclic antidepressant (TCA) that has been used in the United Kingdom and several other European countries for the treatment of depression but appears to no longer be marketed. Along with trimipramine, iprindole, and amoxapine, it has been described as an \"atypical\" or \"second-generation\" TCA due to its relatively late introduction and atypical pharmacology. It was very little used compared to other TCAs, with the number of prescriptions dispensed only in the thousands.\n\nButriptyline was used in the treatment of depression, usually at dosages of 150\u2013300 mg/day.\n\nButriptyline is closely related to amitriptyline and produces effects similar to those of other TCAs, but its side effects, such as sedation, are said to be less severe, and it has a lower risk of interactions with other medications.\n\nButriptyline has potent antihistamine effects, resulting in sedation and somnolence. It also has potent anticholinergic effects, resulting in side effects like dry mouth, constipation, urinary retention, blurred vision, and cognitive/memory impairment. The drug has relatively weak effects as an alpha-1 blocker and no effect as a norepinephrine reuptake inhibitor, so it is associated with little to no antiadrenergic and adrenergic side effects.\n\nIn vitro, butriptyline is a strong antihistamine and anticholinergic, a moderate 5-HT2 and \u03b11-adrenergic receptor antagonist, and a very weak or negligible monoamine reuptake inhibitor. These actions appear to confer a profile similar to that of iprindole and trimipramine, with serotonin-blocking effects as the apparent predominant mediator of mood-lifting efficacy.\n\nHowever, in small clinical trials using similar doses, butriptyline was found to be as effective as amitriptyline and imipramine as an antidepressant, despite the fact that both of these TCAs are far stronger as both 5-HT2 antagonists and serotonin\u2013norepinephrine reuptake inhibitors. As a result, it may be that butriptyline has a different mechanism of action, or perhaps functions in the body as a prodrug of a metabolite with different pharmacodynamics.", "doc_id": "8e77c096-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Traditional_games_in_the_Philippines", "document": "Traditional Filipino games or indigenous games in the Philippines are games that have been played across multiple generations, usually using native materials or instruments. In the Philippines, due to limited resources for toys, children usually invent games that need nothing but the players themselves. There are many kinds of traditional Philippine games suited for children, and they form part of the cultural and traditional games of the Philippines. These games are not only fun to play but also beneficial, as different games require different skills.
These games are also an important part of Filipino culture.\n\nLaro ng Lahi was coined and popularized by the Samahang Makasining (commonly known as \"Makasining\") with the help of the National Commission for Culture and the Arts, Philippine Local Government Units, other organizations and other institutions. Imparting these Filipino games to young Filipinos is one of the organization's main activities. The Makasining also created time-based scoring for patintero, syatong, dama, lusalos and holen butas.\n\nTraditional Philippine games, such as luksong baka, patintero, piko, and tumbang preso are played primarily as children's games. The yo-yo, a popular toy in the Philippines, was introduced in its modern form by Pedro Flores with its name coming from the Ilocano language.\n\nTraditional Filipino games are usually played outdoors by younger children together with their neighbors and friends. The games have no definite rules or strict regulations; different communities and regions have varying versions of the games that are agreed upon between themselves. Most games have two-team gameplay in which players divide themselves into roughly equal numbers, usually predetermined by two separate team leaders first playing Jack 'n' poy and then selecting a teammate after each match. Another common way of creating two teams is by 'win-lose', in which each player picks another person to play Jack 'n' poy with, the winners then forming one team and the losers the other. Filipino games number more than thirty-eight.\n\nKalahoyo (lit. hole-in) is an outdoor game played by two to ten players. Accurate targeting is the critical skill, because the objective is to hit the anak (small stones or objects) with the use of the pamato (big, flat stone), trying to send it to the hole.\n\nA small hole is dug in the ground, and a throwing line is drawn opposite the hole (approx. 5 to 6 metres (16 to 20 ft) away from the hole). A longer line is drawn between the hole and the throwing line. Each player has a pamato and an anak. All the anak are placed on the throwing line, and players try to throw their pamato into the hole from the throwing line. The player whose pamato is in the hole or nearest the hole gets the chance for the first throw. Using the pamato, the first thrower tries to hit the anak, attempting to send it to the hole. Players take turns hitting their anak until one of them knocks it into the hole. The game goes on until only one anak is left outside the hole. Players who get their anak inside the hole are declared winners, while the alila (loser) or muchacho is the one whose anak is left outside the hole. The alila or muchacho is \"punished\" by the winners as follows:\n\nWinners stand at the throwing line with their anak beyond line A-B (the longer line between the hole and the throwing line). The winners hit their anak with their pamato. The muchacho picks up the pamato and returns it to the owner. The winners repeat throwing as the muchacho keeps on picking up and returning the pamato as punishment. Winners who fail to hit their respective anak stop throwing. The objective is to tire the loser as punishment. When all are through, the game starts again.\n\nTwo people hold the ends of a stretched garter horizontally while the others attempt to cross over it. The goal is to cross without tripping on the garter. The game starts with the garter at ankle height; with each round, the garter's height is raised.
The higher rounds demand dexterity, and the players generally leap with their feet first in the air, so their feet cross over the garter, and they end up landing on the other side. As the height increases, cartwheels to \"cross\" the garter are allowed. Additionally, players can add a rule (only allowed at heights lower than the head) requiring them to cross over with both legs together rather than separately.\n\nThe name holen is derived from the phrase \"hole in\". Players hold the ball or marble called a holen in their hand and throw it to hit another player's ball out of the playing area. Holen is a local variation of the game known as marbles in the United States. It is played in a more precise way by tucking the marble with the player's middle finger, with the thumb under the marble, and the fourth finger used to stabilize the marble. Players aim at grouped marbles inside a circle and flick the marble from their fingers. Anything they hit out of the circle is theirs. Whoever obtains the most marbles wins the game. Players (manlalaro) can also win the game by eliminating their opponents through hitting their marbles.\n\nAnother version of this game requires three holes lined up in the ground separated by some distance. Each player tries to complete a circuit, travelling to all the holes and back in order. Players decide on the starting line and the distance between holes. The first to complete the circuit wins the game. Players can knock another player's holen (marble) away using their own marble. Generally the distance between holes allows for several shots to arrive at the next hole; each player shoots from where their prior shot landed. A variant of this game also requires players to return their holen to the starting line.", "doc_id": "8e77c186-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Loyola_University_New_Orleans", "document": "Loyola University New Orleans is a private Jesuit university in New Orleans, Louisiana. Originally established as Loyola College in 1904, the institution was chartered as a university in 1912. It bears the name of the founder of the Jesuit order, Saint Ignatius of Loyola, and is a member of the Association of Jesuit Colleges and Universities.\n\nLoyola University in New Orleans was founded by the Society of Jesus in 1904 as Loyola College on a section of the Foucher Plantation bought by the Jesuits in 1886. A young Jesuit, Fr. Albert Biever, was given a nickel for streetcar fare and told by his Jesuit superiors to travel Uptown on the St. Charles Streetcar and found a university. As with many Jesuit schools, it contained both a college and a preparatory academy. The first classes of Loyola College were held in a residence behind Most Holy Name of Jesus Church. Fr. Biever was the first president. The first of Loyola's permanent buildings was undertaken in 1907, with Marquette Hall completed in 1910.\n\nIn 1911, the Jesuit schools in New Orleans were reorganized. The College of the Immaculate Conception, founded in 1847 in downtown New Orleans, split its high school and college divisions and became solely a secondary institution, now known as Jesuit High School. Loyola was designated as the collegiate institution and was chartered as Loyola University on July 10, 1912.\n\nThe university enrolls 5,000 students, including 3,000 undergraduates. The student-to-faculty ratio is 11 to 1. The Princeton Review features Loyola New Orleans in the 2010 edition of its annual book, The Best 371 Colleges. Loyola University New Orleans ranked 10th among South regional universities in the 2017 U.S.
News & World Report Best Colleges rankings. The Princeton Review, a New York-based education services company, says Loyola New Orleans offers students an outstanding undergraduate education.\n\nNearly all classes are taught by full-time faculty, 91 percent of whom hold doctoral or equivalent degrees in their areas of expertise. Loyola professors have been recognized nationally and internationally by the Pulitzer Committee, the National Science Foundation, the National Endowment for the Humanities, and by numerous other associations.\n\nLoyola is located in the historic Audubon Park District on St. Charles Avenue. Its original campus, now called the Main Campus, was founded on a tract of land purchased by the New Orleans Jesuits in 1889. The purchased portion of land was much larger than the current-day campus; in fact, the original land purchase contained the land now occupied by both Loyola and Tulane universities and Audubon Place.[40] Over the next twenty years, portions of the original land purchase were sold to different entities to raise money for the new university, resulting in the current Main Campus area of 19 acres.\n\nBy the 1950s, most of the original campus had been developed and the university looked for areas where it could expand. In the 1960s, J. Edgar Monroe, a major benefactor of the university, donated to Loyola a large undeveloped tract of land in Metairie where the university could either expand or move its entire location. After reviewing its options, including the sale of the original campus to Tulane University, the university decided to remain on St. Charles Avenue, subsequently selling off its Metairie property within ten years as a condition of the donation.\n\nThe Louis J. Roussel Jr. Performance Hall on the Loyola campus, which stages symphony concerts, is named for the late New Orleans businessman Louis J. Roussel Jr.\n\nThe closure of St. Mary's Dominican College in 1984 provided an opportunity for Loyola to expand its campus. After renovation of the closed college and some new construction, the Broadway Campus was opened in 1986, with several university offices and programs, most significantly the school of law, moving to the new campus.", "doc_id": "8e77c23a-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Bile_bear", "document": "Bile bears, sometimes called battery bears, are bears kept in captivity to harvest their bile, a digestive fluid produced by the liver and stored in the gallbladder, which is used by some traditional Asian medicine practitioners. It is estimated that 12,000 bears are farmed for bile in China, South Korea, Laos, Vietnam, and Myanmar. Demand for the bile has been found in those nations as well as in some others, such as Malaysia and Japan.\n\nThe bear species most commonly farmed for bile is the Asiatic black bear (Ursus thibetanus), although the sun bear (Helarctos malayanus), brown bear (Ursus arctos) and every other species are also used (the only exception being the giant panda, which does not produce UDCA). Both the Asiatic black bear and the sun bear are listed as Vulnerable on the Red List of Threatened Animals published by the International Union for Conservation of Nature. They were previously hunted for bile, but factory farming has become common since hunting was banned in the 1980s.\n\nThe bile can be harvested using several techniques, all of which require some degree of surgery, and may leave a permanent fistula or inserted catheter.
A significant proportion of the bears die from the stress of unskilled surgery or from the infections which may follow.\n\nFarmed bile bears are housed continuously in small cages which often prevent them from standing or sitting upright, or from turning around. These highly restrictive cage systems and the low level of skilled husbandry can lead to a wide range of welfare concerns including physical injuries, pain, severe mental stress and muscle atrophy. Some bears are caught as cubs and may be kept in these conditions for up to 30 years.\n\nThe value of the bear products trade is estimated to be as high as $2 billion. The practice of factory farming bears for bile has been extensively condemned, including by Chinese physicians.\n\nBear bile and gallbladders, which store bile, are ingredients in traditional Chinese medicine (TCM). Their first recorded use is found in Tang Ban Cao (Newly Revised Materia Medica, Tang Dynasty, 659 CE). The pharmacologically active ingredient contained in bear bile and gallbladders is ursodeoxycholic acid (UDCA); bears are the only mammals to produce significant amounts of UDCA.\n\nInitially, bile was collected from wild bears which were killed and the gall and its contents cut from the body. In the early 1980s, methods of extracting bile from live bears were developed in North Korea and farming of bile bears began. This rapidly spread to China and other regions. Bile bear farms were started to reduce hunting of wild bears, with the hope that if bear farms raised a self-sustaining population of productive animals, poachers would have little motivation to capture or kill bears in the wild.\n\nThe demand for bile and gallbladders exists in Asian communities throughout the world, including the European Union and the United States. This demand has led to bears being hunted in the US specifically for this purpose.", "doc_id": "8e77c2c6-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Jim_Prentice", "document": "Peter Eric James Prentice PC QC (July 20, 1956 \u2013 October 13, 2016) was a Canadian politician who served as the 16th premier of Alberta from 2014 to 2015. In the 2004 federal election he was elected to the House of Commons of Canada as a candidate of the Conservative Party of Canada. He was re-elected in the 2006 federal election and appointed to the cabinet as Minister of Indian Affairs and Northern Development and Federal Interlocutor for M\u00e9tis and Non-Status Indians. Prentice was appointed Minister of Industry on August 14, 2007, and after the 2008 election became Minister of Environment on October 30, 2008. On November 4, 2010, Prentice announced his resignation from cabinet and as MP for Calgary Centre-North. After retiring from federal politics he entered the private sector as vice-chairman of CIBC.\n\nPrentice entered provincial politics in his home province of Alberta, and ran for the leadership of the Progressive Conservative Association of Alberta to replace Dave Hancock, who was serving as interim Premier and party leader after Alison Redford's resignation. On September 6, 2014, Prentice won the leadership election, becoming leader of the Progressive Conservatives and, as his party held a majority in the Legislative Assembly of Alberta, Premier. As Premier of Alberta, Prentice formed a new cabinet consisting of some members from the previous government, but also new Ministers including two who did not hold seats in the Legislature\u2014Stephen Mandel and Gordon Dirks.
All three stood as candidates in by-elections scheduled for October 27, 2014, and all three were elected, with Prentice becoming the MLA for Calgary-Foothills. After introducing his first budget in 2015, Prentice called an early provincial election for May 5, 2015. In the election, Prentice's PCs suffered an unprecedented defeat, dropping to third place in the legislature with just 10 seats \u2013 ending 44 years of Tory rule in Alberta, the longest consecutive reign for any political party at the provincial level in Canada. Despite winning re-election in Calgary-Foothills, on election night Prentice resigned as both PC leader and MLA and retired from politics after results indicated that the Alberta NDP had won a majority government.\n\nOn October 13, 2016, Prentice and three others were killed when the aircraft in which they were travelling crashed shortly after taking off from Kelowna, British Columbia. The flight was en route from Kelowna to Springbank Airport, just outside Calgary.\n\nPrentice joined the Progressive Conservative Party of Canada in 1976, and remained active in Tory circles thereafter. In the 1986 provincial election, Prentice ran for the Progressive Conservatives in Calgary Mountain View, but was defeated by NDP candidate Bob Hawkesworth.\n\nDuring the early 1990s, Prentice served as the governing federal PC party's chief financial officer and treasurer (1990\u201393). Prentice first ran for Parliament as the nominated Progressive Conservative candidate in a spring 2002 by-election in the riding of Calgary Southwest that followed the retirement of Preston Manning as the riding's Member of Parliament (MP). When newly elected Canadian Alliance leader Stephen Harper replaced nominated CA candidate Ezra Levant in the by-election, Prentice withdrew from the race, following the common practice of allowing a party leader to win a seat uncontested so that they may lead their party within Parliament.\n\nHe ran in the 2003 Progressive Conservative leadership election to support the \"United Alternative\" proposal to merge the PC party with the Canadian Alliance. He was seen by many as an alternative to the \"status quo\" candidate and front-runner Peter MacKay. A basic platform of Prentice's campaign was that \"no one has ever defeated the Liberals with a divided conservative family.\" Prentice entered the 2003 convention day with some momentum, after delivering a passionate speech to the assembled delegates that encouraged Tories to be proud of their accomplishments, despite recent setbacks, and that recalled the sacrifices of Canadian soldiers who fought in the Battle of Passchendaele. He also unexpectedly received the support of fellow leadership challenger Craig Chandler, who withdrew early. Prentice ultimately emerged in second place on the fourth ballot to the eventual winner MacKay. Consistent with his positions during the leadership race, Prentice was a supporter of the merger endorsed by both the CA and PC parties in December 2003 that formed the new Conservative Party of Canada.\n\nPrentice was the first declared candidate for the leadership of the new Conservative Party, announcing his run on December 7, 2003, the day after the new party was ratified by members of the PC Party. Prentice began his campaign in Calgary and toured parts of Ontario, specifically visiting Kingston, Ontario, the hometown of the first conservative leader Sir John A. Macdonald.
However, he withdrew from the race on January 12, 2004, citing difficulty in raising new funds less than a year after his unsuccessful first leadership bid. The leadership election was won by Stephen Harper, who later became Prime Minister of Canada after the 2006 Canadian federal election.\n\nPrentice ran in the riding of Calgary Centre-North in the 2004 election for the new Conservative Party, and won the seat with 54% of the popular vote.\n\nAfter Prentice was sworn in as the MP for Calgary Centre-North on July 16, Conservative Party leader Stephen Harper named him to the Shadow Cabinet as the Official Opposition Critic for Indian and Northern Affairs. In that role Prentice opposed the Tli Cho land claim agreement, which he said would make Canadian law secondary to Tlicho local law. Prentice was also a strong supporter of the proposed and controversial Mackenzie Valley pipeline. He criticized the Liberal government for its treatment of aboriginal women, and for the alleged costs of administering the Residential School Claims program for aboriginal victims of abuse.\n\nPrentice described himself as a Red Tory in the Conservative Party and surprised many observers when he voted in favour of Bill C-38 supporting same-sex marriage.", "doc_id": "8e77c3fc-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Sexton_Foods", "document": "John Sexton & Company, also known as Sexton Quality Foods, was a broad-line national wholesale grocer that serviced the restaurant, hotel and institutional trade from regional warehouses and truck fleets located in major metropolitan areas of the United States. Sexton Quality Foods eventually became US Foodservice in 1997. The company was established in Chicago in 1883 by John Sexton.\n\nJohn Peter Sexton was born on June 29, 1858, in Dundas, Ontario, Canada, to Michael and Ellen (Connors) Sexton. (Michael and Ellen had emigrated from County Clare, Ireland, and married in Dundas on May 11, 1854, with Cornelius Sexton and Elizabeth Connors as witnesses.)\n\nJohn Sexton worked in a general store in Niagara, Ontario, from 1874 to 1877. He immigrated to Chicago in 1877 at age 18 and began working for various wholesale grocers in Chicago as a clerk and city salesman. During this time, he realized that there was an opportunity to specialize in selling quality teas, coffees and spices.\n\nJohn Sexton married Anna Louise Bartleman (born May 22, 1866, in Chicago) on August 11, 1886, in Chicago. (Anna Louise's parents, Christian and Theresa (Albrecht) Bartleman, had emigrated from Saxe-Coburg Gotha, Germany in the mid-1850s.) The couple had five children: Thomas George (born February 21, 1889, in Chicago), Franklin (born February 16, 1891), Sherman J. (born September 12, 1892), Helen (Egan) (birth date unknown) and Ethel (Marten) (born 1896). The family home was at 2238 North Dayton Street in Chicago. All three sons and both sons-in-law worked for the company in various roles.\n\nBy 1912, Sexton had outgrown the Lake and Franklin location. In 1913, Sexton purchased a 1-acre (4,000 m2) parcel of land on the north side of the Chicago River on the corner of Illinois and Orleans Streets. The majority of Sexton's customers at that time were not in Chicago, and access to the railroads was critical to growing the business. Institutional customers throughout the country would order groceries by the railcar from Sexton Quality Foods, and Sexton wanted his new building to be able to receive and dispatch rail shipments directly.
In 1913, construction began on a 300,000-square-foot (28,000 m2), six-story, fire sprinkler-protected, multi-use building designed by architect Alfred S. Alschuler.\n\nIn 1915, Sexton moved into the new building, which housed the corporate offices, sales offices, country division, dry goods warehouse, food laboratory, refrigeration plant, and the Sexton Quality Foods manufacturing division, the Sunshine Kitchens, which produced private-label sauces, soups and specialty products sold exclusively under the John Sexton & Co. banner. The first floor was divided into railcar receiving, railcar shipping, country parcel shipping, city delivery and city receiving. The building was large enough to unload three railcars simultaneously.\n\nBy 1921, Sexton had established distribution warehouses in San Francisco, Dallas and Omaha. This was done partly to improve customer service by reducing the time between order and delivery. In addition, a majority of canned fruits, jellies and preserves were grown and packed on the west coast, so considerable freight expense could be saved by dividing the products according to regional demand. These warehouses would later become important branches for Sexton Quality Foods.\n\nIn 1924, John Sexton decided to modernize the company's city delivery fleet by purchasing 26 electric trucks from The Commercial Truck Company of America in Philadelphia and six gasoline-powered, 1.5-ton, six-wheeled trucks manufactured by Diamond T of Chicago. The modernization retired 50 horses and 35 grocery wagons, and saved $12,000 in the first year. Each CT electric truck averaged 12 miles (19 km) per delivery day, and was extremely reliable, easy to drive and well adapted for city deliveries.[6] However, in cold weather, their batteries were less efficient and the hard rubber tires had poor traction on snow-covered streets, with the result that the electric trucks' range was diminished. The electric trucks were in service until the late 1930s and were gradually phased out as the Chicago area expanded into the suburbs, the delivery route mileage increased, the roads improved and commercial truck reliability improved. The six Diamond T trucks were used for suburban Chicago deliveries and averaged 180 miles (290 km) each per delivery day in 1924.\n\nIn 1897, Sexton Quality Foods began publishing a mail-order catalog, targeted at rural customers and selling food and farm supplies. Orders were shipped from Chicago via rail to regional terminals where railway express would make the final delivery to the customer. Sexton Quality Foods' catalog business was an important division for years. It was ultimately led by Sexton's second-oldest son, Franklin, who later led the coffee and tea division and became the company treasurer. Known as the \"Country Division\", it sold mostly coffee, spices, flour, canned fruits and canned vegetables; however, paint, motor oil, nails, roof tar and canvas were also sold. The Sexton Country Division flourished until automobiles became affordable and rural automobile ownership increased. Rural customers were then more likely to drive to town to make frequent smaller purchases rather than place large orders from Chicago. The last country division catalog was published in the late 1930s.\n\nIn 1928, at age 70, John Sexton stepped down as president of Sexton Quality Foods but remained its chairman. He asked his sons, Thomas, Franklin, and Sherman, who should lead the company.
All agreed that Sherman was the best choice, and he became president of the company in 1928. Franklin remained the treasurer and Thomas remained vice president of merchandising. In 1930, at age 71, John Sexton died while on vacation in Los Angeles. After his death, the ownership of the company was divided among John Sexton's wife Anna Louise (33%) and their children Thomas (13.3%), Franklin (13.3%), Sherman (13.3%), Helen (13.3%) and Ethel (13.3%).\n\nBy late 1931, the John Sexton & Co. leadership was as follows: Anna Louise (Bartleman) Sexton, Chairman; Sherman J. Sexton, President (Sales and Advertising); Harold R. White, Vice President (Canned and Dried Foods); Franklin Sexton, Secretary (Tea and Coffee); and Edmund A. Egan, Treasurer (Maintenance and Operation). In 1933, Sexton Foods opened its first distribution center outside Chicago by renting a warehouse in Brooklyn and buying a delivery fleet of five Diamond T trucks dedicated to the New York market. The New York sales office was then supported by a regional distribution network that could provide next-day delivery. The same year, the first Sexton professional salesman training school was established, led by Henry A. Marten, husband of Ethel.\n\nSexton Quality Foods expanded its print advertising to restaurant, college, hospital and food service trade publications in order to reach its customers directly. In addition, Sexton Quality Foods had a sales booth at all major trade conferences for hospital administrators, college dietitians and restaurant associations. Sexton also published the first Sexton Cookbook in 1937, with two subsequent cookbooks published in 1941 and 1950. These compiled large-quantity recipes that Sexton customers had developed. Sexton Quality Foods frequently published pamphlets with menu ideas, food suggestions and business hints, and also published annual hardcover diaries that featured customers' recipes.", "doc_id": "8e77c5be-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Bloodhound", "document": "The bloodhound is a large scent hound, originally bred for hunting deer and wild boar and, since the Middle Ages, for tracking people. Believed to be descended from hounds once kept at the Abbey of Saint-Hubert, Belgium, it is called in French le chien de Saint-Hubert.\n\nThis breed is famed for its ability to discern human scent over great distances, even days later. Its extraordinarily keen sense of smell is combined with a strong and tenacious tracking instinct, producing the ideal scent hound, and it is used by police and law enforcement all over the world to track escaped prisoners, missing people, and lost pets.\n\nBloodhounds weigh from 36 to 72 kg (80 to 160 lbs). They are 58 to 69 cm (23 to 27 inches) tall at the withers. According to the AKC standard for the breed, larger dogs are preferred by conformation judges. Acceptable colors for bloodhounds are black, liver, and red. Bloodhounds possess an unusually large skeletal structure with most of their weight concentrated in their bones, which are very thick for their length. The coat, typical for a scent hound, is hard and composed of fur alone, with no admixture of hair.\n\nThis breed is gentle and is tireless when following a scent. Because of its strong tracking instinct, it can be willful and somewhat difficult to obedience train and handle on a leash.
Bloodhounds have an affectionate and even-tempered nature toward humans, making them excellent family pets.\n\nCompared to other purebred dogs, Bloodhounds suffer an unusually high rate of gastrointestinal ailments, with gastric dilatation volvulus (bloat) being the most common. The breed also suffers an unusually high incidence of eye, skin, and ear ailments; thus these areas should be inspected frequently for signs of developing problems. Owners should be especially aware of the signs of bloat, which is both the most common illness and the leading cause of death of Bloodhounds. The thick coat gives the breed the tendency to overheat quickly.\n\nBloodhounds in a 2004 UK Kennel Club survey had a median longevity of 6.75 years, which makes them one of the shortest-lived dog breeds. The oldest of the 82 deceased dogs in the survey died at the age of 12.1 years. Bloat took 34% of the animals, making it the most common cause of death in Bloodhounds. The second leading cause of death in the study was cancer, at 27%; this percentage is similar to other breeds, but the median age at death was unusually young (about 8 years). In a 2013 survey, the average age at death for 14 Bloodhounds was 8.25 years.\n\nThe Bloodhound's physical characteristics account for its ability to follow a scent trail left several days in the past. The olfactory bulb in dogs is roughly 40 times bigger than the olfactory bulb in humans, relative to total brain size, with 125 to 220 million olfactory receptors. Consequently, dogs have an olfactory sense 40 times more sensitive than that of a human. In some dog breeds, such as Bloodhounds, the olfactory sense has nearly 300 million receptors.\n\nThe large, long pendent ears serve to prevent wind from scattering nearby skin cells while the dog's nose is on the ground; the folds of wrinkled flesh under the lips and neck\u2014called the shawl\u2014serve to catch stray scent particles in the air or on a nearby branch as the Bloodhound is scenting, reinforcing the scent in the dog's memory and nose. However, not all agree that the long ears and loose skin are functional, some regarding them as a handicap.\n\nThere are many accounts of Bloodhounds successfully following trails many hours, and even several days, old \u2013 the record being a family found dead in Oregon in 1954, over 330 hours after they had gone missing. The Bloodhound is generally used to follow the individual scent of a fugitive or lost person, taking the scent from a 'scent article' \u2013 something the quarry is known to have touched, which could be an item of clothing, a car seat, an identified footprint, etc. Many Bloodhounds will follow the drift of scent a good distance away from the actual footsteps of the quarry, which can enable them to cut corners and reach the end of the trail more quickly. In America, sticking close to the footsteps is called 'tracking', while the freer method is known as 'trailing' (in the UK, 'hunting'), and is held to reflect the Bloodhound's concentration on the individual human scent, rather than that of, say, vegetation crushed by the feet of the quarry. Having lost a scent, a good Bloodhound will stubbornly cast about for long periods, if necessary, in order to recover it. The Bloodhound is handled on a tracking harness, which has a metal ring above the shoulders, to which a leash is attached, so that the hound's neck is not jerked up when the leash becomes taut, as it would with a collar.
The leash is at least long enough to allow the hound to cross freely in front of the handler; some handlers prefer quite a short leash, giving better communication with the hound, while others like something longer, perhaps 20 or 30 feet.", "doc_id": "8e77c6d6-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Mission_Hill,_Boston", "document": "Mission Hill is a 0.75 square mile (2 square km), primarily residential neighborhood of Boston, bordered by Roxbury, Jamaica Plain and Fenway-Kenmore and by the town of Brookline. It is home to several hospitals and universities, including Brigham and Women's Hospital and New England Baptist Hospital. Mission Hill is known for its brick row houses and triple-decker homes of the late 19th century. The population was estimated at 15,883 in 2011.\n\nThe neighborhood is roughly bounded by Columbus Avenue and the Boston neighborhood of Roxbury to the east, Ruggles Street to the northeast, and the Olmsted-designed Riverway/Jamaicaway and the town of Brookline to the west. The Historic District was designated by the Boston Landmarks Commission in 1985 and is roughly bounded by Smith Street, Worthington Street, Tremont Street (to the south), and Huntington Avenue (to the west). The Mission Hill neighborhood is immediately north of the Boston neighborhood of Jamaica Plain. It is served by the MBTA Green Line E branch and the Orange Line, and is within walking distance of the Boston Museum of Fine Arts and the Gardner Museum. \"The Hill\" overlaps with about half of the Longwood Medical and Academic Area, home to 21 health care, research, and educational institutions which together provide the largest employment area in the City of Boston outside of downtown. Due to these adjacencies, the neighborhood often struggles with institutional growth taking over residential buildings and occupying storefront commercial space. Recent years have seen new retail stores, restaurants, and residential development giving the neighborhood a stronger political voice and identity, as some of the educational institutions have made commitments to house all or most of their roughly 2,000 undergraduate students in newly erected campus housing, including several new high-rise dormitories. People aged 20 to 24 account for 32% of the population currently living in Mission Hill.\n\nThe Mission Hill Triangle is an architectural conservation district with a combination of freestanding houses built by early wealthy landowners, blocks of traditional brick rowhouses, and many triple-deckers. Many are now condominiums, but there are also several two-family and some single-family homes.\n\nThe neighborhood was named in March 2008 as one of 25 \"Best ZIP Codes in Massachusetts\" by The Boston Globe, citing increased value in single-family homes, plentiful restaurants and shopping, a marked racial diversity, and the fact that 65% of residents walk, bike, or take public transit to work.\n\nThe neighborhood has two main commercial streets: Tremont Street and Huntington Avenue. Both have several small restaurants and shops. Mission Hill is at the far western end of Tremont Street, with Government Center at the far eastern end. Mission Hill\u2019s main zip code is 02120.
Additionally, a very small portion of the southeastern edge uses the code 02130; areas adjacent to the Longwood Medical Area use 02115, and two streets on the far western edge use 02215.\n\nParker Hill, Back of The Hill, and Calumet Square are areas within Mission Hill, an officially designated neighborhood in Boston (as attested by the numerous signs prohibiting parking without a suitable Mission Hill neighborhood residential sticker, which only residents can legally procure).\n\nBrigham Circle, located at the corner of Tremont and Huntington, is the neighborhood's commercial center, with a grocery store (Stop & Shop), drug stores (Walgreens), bistros, banks (Santander Bank is in Hanlon Square), and taverns.\n\nOne block up the hill from Brigham Circle is Boston's newest park, Kevin W. Fitzgerald Park (formerly Puddingstone Park), created when a new $60-million mixed-use building was completed in 2002.\n\nOn Tremont Street is Our Lady of Perpetual Help Basilica (1878, Schickel and Ditmars, 1910 towers addition by Franz Joseph Untersee), an eponymous landmark building that dominates the skyline of the area. The church was chosen as the location for the funeral of Senator Edward M. Kennedy on Saturday, August 29, 2009.\n\nOne of the neighborhood's parks is Kevin W. Fitzgerald Park. Formerly named Puddingstone Park because of the local rock sources, the park includes lawn space and asphalt walkways lined with benches where people can rest and enjoy views of Lower Roxbury, the Fenway, and Back Bay. The park occupies the site of one of the five quarries in Boston, known as the Harvard Quarry; quarry operations ceased around 1910, leaving a 65-foot-high quarry wall. In the 1990s, the open space planning committee worked on preserving public access to the quarry. The community and the developer decided together that the walls of the old quarry would be preserved and that a new 6-acre open space would be created for the community at the top of the puddingstone bowl. The Harvard Quarry Urban Wild was then named Puddingstone Park. In November 2006, the park was renamed Kevin Fitzgerald Park in honor of the former Massachusetts State Representative. Most of the land is already being developed for more housing and institutional purposes; only 6.2 acres are protected to preserve public access.\n\nMcLaughlin Park is another park located in Mission Hill. An article in the Mission Hill Gazette on April 3 reported that the park was to be renovated on a $430,000 budget, describing the plan as follows: \"The City presented a plan for the renovation in September that would lay a loop path around the upper terrace; build an overlook area along the southeastern portion of the terrace; repair Ben's Tower; add a new set of stairs from the upper terrace to the lower terrace; and address other maintenance issues.\" Ben's Tower is a memorial for a child named Ben, a Mission Hill resident who enjoyed playing in McLaughlin Park and who died of cancer.\n\nThe Mission Hill Health Movement is a community-based organization addressing an array of health conditions and other issues of residents of the Mission Hill community and surrounding neighborhoods, such as obesity, diabetes, heart disease, mental illness and depression, exercise and energy levels, personal and social responsibility for health, and access to health care.
They sponsor the twice-weekly Mission Hill farmers' markets from June to November, the annual community health fair (with MCPHS University), a summer food fair in September, and a low-cost fresh produce and bread distribution, the $2 bag program, with Fair Foods of Dorchester. At the Tuesday and Thursday farmers' markets, local farmers sell their freshly picked produce. MHHM sponsors several self-help health programs, including a walking group, a Women's Health Group, and a Diabetes Self-Management Group to educate newly diagnosed and current diabetics and pre-diabetics about living responsibly with the condition, improving overall health, and easing the day-to-day burdens of chronic diabetes. In 2011, the Mission Hill Main Streets, Tobin Community Center, Mission Hill Health Movement, and Sociedad Latina sponsored the first Mission Hill healthy food festival. Longwood-based hospitals, such as Beth Israel Deaconess Medical Center and Boston Children's Hospital, schools such as MCPHS University (formerly Massachusetts College of Pharmacy and Health Sciences), and the Whittier Street Health Center tabled at this festival to field questions and distribute informative literature. The Boston Collaborative for Food & Fitness, Boston Vegetarian Society, Cooking Matters, and Sociedad Latina also offered helpful information. Each spring, the Mission Hill Health Movement sponsors a community health fair, which convened 20\u201340 local institutions, organizations, and neighborhood businesses in 2011 and 66 such exhibitors in 2015, providing health information, screening tests, and health-supporting food. They also provide a \"FEET FIRST\" walk on Thursdays at 10 am, rain or shine, from 1534 Tremont Street, exploring the colorful and visually interesting Mission Hill neighborhood and contiguous areas, walking through the Fens, the Rose Garden, Jamaica Plain, and back. \"Walks will terminate at the Brigham Circle Farmers Market from mid-June until the end of October.\"", "doc_id": "8e77c848-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Fallingwater", "document": "Fallingwater is a house designed by the architect Frank Lloyd Wright in 1935 in the Laurel Highlands of southwest Pennsylvania, about 70 miles (110 km) southeast of Pittsburgh. It is built partly over a waterfall on Bear Run in the Mill Run section of Stewart Township, Fayette County, Pennsylvania. The house was designed to serve as a weekend retreat for Liliane and Edgar J. Kaufmann, the owner of Pittsburgh's Kaufmann's Department Store.\n\nAfter its completion, Time called Fallingwater Wright's \"most beautiful job\", and it is listed among Smithsonian's \"Life List of 28 Places to See Before You Die\". The house was designated a National Historic Landmark on May 11, 1976. In 1991, members of the American Institute of Architects named Fallingwater the \"best all-time work of American architecture\", and in 2007 it was ranked 29th on the AIA's list of America's Favorite Architecture.\n\nThe house and seven other Wright constructions were inscribed as a World Heritage Site in 2019 under the title \"The 20th-Century Architecture of Frank Lloyd Wright\".\n\nAs reported by Frank Lloyd Wright's apprentices at Taliesin, Kaufmann was in Milwaukee on September 22, nine months after their initial meeting, and called Wright at home early Sunday morning to surprise him with the news that he would be visiting him that day.
Wright had told Kaufmann in earlier communications that he had been working on the plans but had not actually drawn anything. After breakfast, amid a group of very nervous apprentices, Wright calmly drew the plans in the two hours it took Kaufmann to drive to Taliesin. Witness Edgar Tafel, an apprentice at the time, stated later that when Wright was designing the plans he spoke of how the spaces would be used, directly linking form to function.\n\nWright designed the home above the waterfall: Kaufmann had expected it to be below the falls to afford a view of the cascades, and it has been said that Kaufmann was initially very upset with this change.\n\nThe Kaufmanns planned to entertain large groups, so the house needed to be larger than the original plot allowed. They also requested separate bedrooms as well as a bedroom for their adult son and an additional guest room. A cantilevered structure was used to address these requests. The structural design for Fallingwater was undertaken by Wright in association with staff engineers Mendel Glickman and William Wesley Peters, who had been responsible for the columns in Wright's revolutionary design for the Johnson Wax Headquarters.\n\nPreliminary plans were issued to Kaufmann for approval on October 15, 1935, after which Wright made an additional visit to the site to generate a cost estimate for the job. In December 1935, an old rock quarry was reopened to the west of the site to provide the stones needed for the house's walls. Wright visited only periodically during construction, assigning his apprentice Robert Mosher as his permanent on-site representative. The final drawings were issued by Wright in March 1936, with work beginning on the bridge and main house in April.\n\nThe construction was plagued by conflicts between Wright, Kaufmann, and the contractor. Uncomfortable with what he saw as Wright's insufficient experience using reinforced concrete, Kaufmann had the architect's daring cantilever design reviewed by a firm of consulting engineers. Upon receiving their report, Wright took offense, immediately requesting that Kaufmann return his drawings and indicating that he was withdrawing from the project. Kaufmann relented to Wright's gambit, and the engineers' report was subsequently buried within a stone wall of the house.\n\nFor the cantilevered floors, Wright and his team used upside-down T-shaped beams integrated into a monolithic concrete slab, which both formed the ceiling of the space below and provided resistance against compression. The contractor, Walter Hall, also an engineer, produced independent computations and argued for increasing the reinforcing steel in the first floor's slab; Wright refused the suggestion. There was speculation over the years as to whether it was the contractor who quietly doubled the amount of reinforcement, or Kaufmann's consulting engineers who doubled the amount of steel specified by Wright. During the restoration begun in 1995, it was confirmed that additional concrete reinforcement had been added.\n\nIn addition, the contractor did not build in a slight upward incline in the formwork for the cantilever to compensate for its settling and deflection. Once the formwork was removed, the cantilever developed a noticeable sag. Upon learning of the unapproved steel addition, Wright recalled Mosher. With Kaufmann's approval, the consulting engineers had a supporting wall installed under the main supporting beam for the west terrace.
When Wright discovered it on a site visit, he had Mosher discreetly remove the top course of stones. When Kaufmann later confessed to what had been done, Wright showed him what Mosher had done and pointed out that the cantilever had held up for the past month under test loads without the wall's support.", "doc_id": "8e77c92e-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/James_Edward_Edmonds", "document": "Brigadier-General Sir James Edward Edmonds CB CMG (25 December 1861 \u2013 2 August 1956) was an officer of the Royal Engineers in the late-Victorian era British Army who worked in the Intelligence Division, took part in the creation of the forerunner of MI5 and promoted several spy scares, which failed to impress Richard Haldane, the Secretary of State for War (1905\u20131912). Viscount Esher said that Edmonds was a silly witness from the War Office [who saw] rats everywhere - behind every arras.\n\nIn 1911, Edmonds returned to soldiering as the chief of staff of the 4th Division, despite being advised that it was a bad career move. In the manoeuvres of 1912, with the 3rd Division, the 4th Division took part in the defeat of I Corps, commanded by Douglas Haig, the only permanent corps headquarters in the army. The 4th Division training emphasised the retreat despite such tactics being barred by the War Office. When the First World War began, Edmonds thought that the division was well trained but lacking much of the equipment provided to German divisions.\n\nThe 4th Division fought at the Battle of Le Cateau on 26 August and then participated in the Great Retreat, an ordeal which Edmonds, 53 years old, found most trying, buoyed up only by his pre-war training and belief that it would end in a counter-offensive. Edmonds found that once there was time to rest, he could not, and was transferred to GHQ, the headquarters of the British Expeditionary Force, where he feared being sent home. Edmonds spent the rest of the war at GHQ and in 1918 was made deputy engineer-in-chief. Edmonds retired from the army in 1919 with the honorary rank of Brigadier-General.\n\nEdmonds became the Director of the Historical Section of the Committee of Imperial Defence on 1 April 1919 and was responsible for the post-war compilation of the 28-volume Military Operations section of the History of the Great War. Edmonds wrote eleven of the fourteen volumes titled Military Operations, France and Belgium, dealing with the Western Front. \"Military Operations: Italy 1915\u20131919\", the final volume of the series, was published in 1949, just after Edmonds retired. Edmonds spent his retirement at Brecon House, Long Street, Sherborne, Dorset, where he died on 2 August 1956.\n\nJames Edward Edmonds was born in Baker Street, London, on 25 December 1861 to James Edmonds, a master jeweller, and his wife Frances Amelia Bowler, a family that could trace its ancestry to Fowey in Cornwall. Edmonds was educated as a day boy at King's College School, accommodated in a wing of Somerset House. Edmonds claimed that his father taught him languages at breakfast, to the extent that he was familiar with German, French, Italian and Russian. Edmonds did not learn Latin or Greek at school but studied science and geology. Edmonds visited France when he was eight and saw Napoleon III, then returned two years later, soon after the end of the Franco-Prussian War (1870\u20131871).
In his unpublished Memoirs, Edmonds wrote that he was surprised to see that the Arc de Triomphe had not been demolished and that he became sceptical of the reports of war correspondents for the rest of his life.\n\nWhile Edmonds was in Amiens, still under German occupation, a Bavarian officer said \"Ve haf beat de Franzmen, you vill be next\" (sic). This determined Edmonds's father to teach both his sons German and to put them into the army. Edmonds's teachers encouraged him to study maths at Cambridge but when one of his friends passed third in the entrance exam to the Royal Military Academy, Woolwich (RMA Woolwich), Edmonds applied. In July 1879 Edmonds took the RMA Woolwich entrance exam, passed first, and was accepted for a place. At the end of the course Edmonds achieved the highest marks that instructors could remember, was awarded the Pollock Gold Medal for Efficiency and prizes for mathematics, mechanics, fortification, geometrical drawing, military history, drills and exercises and exemplary conduct. Edmonds won the Sword of Honour for the Best Gentleman Cadet and was mentioned by the commander-in-chief of the Army, Prince George, Duke of Cambridge.\n\nEdmonds was commissioned into the Corps of Royal Engineers on 22 July 1881. Edmonds spent four years based in Chatham and a year in Malta studying submarine mining, a matter which the Royal Navy could not be expected to undertake. Edmonds's intellect was recognised with the nickname Archimedes. After returning from Malta, Edmonds was posted to Hong Kong with two companies of engineers to garrison the colony after a Russian invasion scare. The 33rd Engineer Company, in which Edmonds served, was one of those chosen. When the orders were received the company commander went sick and his deputy requested to be excused as his wife was pregnant. The two companies reached Hong Kong, one with eight men and the other with about thirty; the absentees were either ill, invalid or on attachment and had missed the boat.\n\nEdmonds found that rocky outcrops just below the surface in Hong Kong harbour had not been charted and were a danger to shipping, occasionally the cause of serious accidents. Edmonds organised their removal by trailing a rail between two rowing boats and lowering a diver to place an explosive charge on the top. The posting was uneventful; in 1888 Edmonds returned to Chatham after three months' sick leave in Japan and sojourns in the US and Canada, to join the 38th Mining Company as Assistant Instructor. Apparently Edmonds's main duty was to play golf with the Chief Instructor in the afternoons. Edmonds was promoted to captain in 1890 and returned to the RMA Woolwich as an instructor in fortification. During his six years as an instructor Edmonds spent his long vacations abroad learning Russian and other languages.\n\nIn 1895 Edmonds took the entrance exam for the Staff College, Camberley and passed first again; during the year he married Hilda Margaret Ion (died 1921), daughter of the Rev. Matthew Wood; they had one daughter. Twenty-four candidates were chosen by application and eight men with near misses in the examinations could enter by nomination, one of whom was Douglas Haig. Edmonds felt intellectually superior to his peers and wrote later that only George Macdonogh was an exception, a man who could also understand some of the more recondite subjects, like the decoding of cyphers. In his Memoirs, Edmonds wrote that he was often paired with Haig because he was good with detail and Haig a generalist.
Edmonds passed out in 1899 at the top of his class, one of the most successful and popular students of the era, noted for his conversation, which had become even more interesting and was appreciated by, amongst others, Douglas Haig, Aylmer Haldane and Edmund Allenby. Edmonds wrote that Allenby was a blockhead, which Cyril Falls later called \"an error typical of Edmonds's worst side\".\n\nEdmonds overheard Colonel George Henderson predict that Haig would become commander in chief. While at the college, Edmonds co-wrote with his brother-in-law, W. Birkbeck Wood, \"The History of the Civil War in the United States 1861\u20131865\" (1905). The book was well received by reviewers, who wrote that it would appeal to soldiers and to students of history alike. The book was full of statistical information, although the reviewer in the Times Literary Supplement thought that in this, the authors had gone a little too far. The book gave prominence to novel aspects of the war, including the use of cavalry, battles of attrition and the turning of volunteers into disciplined soldiers. The book remained in print for thirty years, and by 1936 was in its fourth edition and in use at West Point.\n\nAfter seven years in intelligence, Edmonds wanted a change and did not want to be subordinate to General Henry Wilson, the new DMO, towards whom Edmonds harboured a certain enmity. Edmonds was offered the posts of commandant of the School of Military Engineering or General Staff Officer (Grade I) (GSO I, the divisional chief of staff) of the 4th Division (Major-General Thomas Snow). Edmonds joined the 4th Division on 1 March 1911, despite being told that it was a bad career move to leave the War Office. Edmonds had gone on leave for three months before transferring, during which time he translated French and Russian works on battlefield engineering. Snow, a somewhat irascible man, quickly gained confidence in Edmonds and told him, \"I provide the ginger and you provide the brains\". The division trained, and in the corps manoeuvres of 1912 the 3rd Division and the 4th Division defeated I Corps, which was under the command of Douglas Haig. An important part of the divisional training was the retreat, despite this being banned by the War Office. On the eve of the war, Edmonds thought that his division was prepared but ill-equipped compared to the items he had seen in use in the German Army when he attended the manoeuvres of 1908. The Germans had machine-guns, flare pistols, trench mortars, ambulances, artillery telephones and field kitchens. The 4th Division was based at Great Yarmouth in August 1914, ready to repel a German invasion attempt.", "doc_id": "8e77cadc-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Thakazhi_Sivasankara_Pillai", "document": "Thakazhi Sivasankara Pillai (17 April 1912 \u2013 10 April 1999), popularly known as Thakazhi after his place of birth, was an Indian novelist and short story writer of Malayalam literature. He wrote over 30 novels and novellas and over 600 short stories focusing on the lives of the oppressed classes. Known for his works such as Kayar (Coir, 1978) and Chemmeen (Prawns, 1956), Pillai was a recipient of the Padma Bhushan, the third highest Indian civilian award.
He was also a recipient of the Jnanpith, India's highest literary award, awarded in 1984 for the novel Kayar.\n\nThakazhi Sivasankara Pillai was born on April 17, 1912 in Thakazhy, a small village in Kuttanad in present-day Alappuzha district of Kerala, to Poypallikalathil Sankara Kurup, who was the brother of Guru Kunchu Kurup, a doyen of Kathakali, and Aripurathuveettil Parvathy Amma. After early tutoring by his father and Chakkampurathu Kittu Asan, a local teacher, Pillai had his primary education at a local school in Thakazhi and passed the 7th standard examination from the English School in Ambalappuzha. Subsequently, he did his high school education, first at a high school in Vaikom and later at a school in Karuvatta, where he had the opportunity to study under Kainikkara Kumara Pillai, who was the headmaster of the school during that period. After passing 10th standard, he moved to Trivandrum and passed the pleader examination from the Government Law College, Thiruvananthapuram. He started his career as a reporter at the Kerala Kesari daily but moved to a legal career by practising under a lawyer named P. Parameshwaran Pillai at the munsif court of Ambalappuzha. It was during this time that he was attracted to the communist movement and participated in the functioning of the Sahitya Pravarthaka Sahakarana Sangham (Writers' Cooperative Society). He presided over the Kerala Sahitya Akademi and was also associated with the Sahitya Akademi as a member of its general council.\n\nPillai married Thekkemuri Chembakasseril Chirakkal Kamalakshy Ammai, whom he affectionately called Katha, in 1934, and the couple had one son and four daughters. He died on April 10, 1999, at the age of 86 (a week before his 87th birthday), survived by his wife, who died on June 1, 2011, and their five children.\n\nPillai, whose works would later earn him the moniker Kerala Maupassant, started writing at an early age. His associations with Kainikkara Kumara Pillai during his school days and with Kesari Balakrishna Pillai during his Thiruvananthapuram days are known to have helped the aspiring writer in his career; it was the latter who introduced him to European literature. His first short story was Daridran (The Poor), which was published in 1929. After many short stories, he wrote Thyagathinu Prathiphalam (Fruits of sacrifice) in 1934, which primarily dealt with the social injustices prevalent during that time. This was the first of his 39 novels; he also published 21 anthologies composed of over 600 short stories, two plays and four memoirs.\n\nPillai's literary works are known to portray the society of Kerala in the mid-20th century. Thottiyude Makan (Scavenger's Son), a story about a scavenger who strives unsuccessfully to keep his son from continuing the family profession, was published in 1947 and is known to be the first realistic novel in Malayalam literature. His political novel, Randidangazhi (Two Measures, 1948), projected the evils of the feudal system that prevailed in Kerala then, especially in Kuttanad. The film adaptation, directed and produced by P. Subramaniam from a screenplay by Thakazhi himself, received a certificate of merit at the National Film Awards in 1958.\n\nIn 1956, Pillai published his love epic Chemmeen (Prawns), which was a departure from his earlier line of realism; the novel received critical acclaim, becoming the first post-colonial Indian novel to be translated into English, and the English translation was accepted into the Indian Series of the UNESCO Collection of Representative Works.
It told a tragic love story against the backdrop of a fishing village in Alappuzha. The novel and its film adaptation, also titled Chemmeen (1965), earned him national and international fame. Chemmeen was translated into 19 world languages and adapted into film in 15 countries. The film adaptation, directed by Ramu Kariat, won the National Film Award for Best Feature Film in 1965. His next notable work was Enippadikal (Rungs of the Ladder), published in 1964, which traces the careerism of an ambitious bureaucrat whose lust for power and position becomes his own undoing. The novel was adapted into a movie in 1973 by Thoppil Bhasi. Anubhavangal Paalichakal, another novel he published in 1966, was also made into a feature film by K. S. Sethumadhavan, in 1971, with Sathyan, Prem Nazir and Sheela in the lead roles.\n\nPillai wrote Kayar (Coir) in 1978, a long novel extending to over 1,000 pages and covering the history of several generations in Kuttanad over some 200 years; it is considered by many his masterpiece, in spite of the popularity of Chemmeen. The novel deals with hundreds of characters over four generations, bringing back to life an axial period (1885\u20131971) during which feudalism, matriliny, and bonded labour gave way to conjugal life and to universal access to land ownership, and later, to decolonisation and the industrial revolution of the 1960s.\n\nIn 1946 Pillai wrote the play Thottilla, a social drama; it was performed on many stages by the Kerala People's Arts Club. He published four autobiographical books and two other works. Four of his short stories formed the basis of a film, Naalu Pennungal, made by Adoor Gopalakrishnan in 2007, which the director termed his homage to the writer.\n\nPillai received the Sahitya Akademi Award in 1957 for the love epic Chemmeen. The Kerala Sahitya Akademi selected Enippadikal for its annual award for novels in 1965. His novel Kayar was selected for the Vayalar Award in 1984; the same year he received the highest Indian literary award, the Jnanpith, and a year later the Government of India awarded him the third highest civilian honour, the Padma Bhushan. The Sahitya Akademi elected him a distinguished fellow in 1989; he had already been a distinguished fellow of the Kerala Sahitya Akademi by then. In 1994, the Government of Kerala awarded him the Ezhuthachan Puraskaram, its highest literary honour. In 1996 he was conferred an honorary doctorate (D.Litt.) by Mahatma Gandhi University. India Post issued a commemorative postage stamp depicting his image in 2003, under the Jnanpith Award Winners series. The Sahitya Akademi commissioned a documentary film on the life of Pillai, and M. T. Vasudevan Nair made Thakazhi, a 57-minute documentary film, which was released in 1998, a year before Pillai's death. The Government of Kerala acquired Sankaramanagala, the ancestral home of Pillai, in 2000, and a museum, the Thakazhi Memorial Museum, was set up in 2001, honoring the writer's memory.", "doc_id": "8e77cc1c-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/RAF_Joyce_Green", "document": "Joyce Green, at Long Reach, near Dartford, was one of the first Royal Flying Corps (RFC) airfields. It was established in 1911 by Vickers Limited (the aircraft and weapons manufacturer), which used it as an airfield and testing ground. At the outbreak of World War I in 1914, the RFC followed and established a base.
Subject to frequent flooding, and with a reputation for being unsuitable and too dangerous for training, it was eventually replaced by a more suitable site at RAF Biggin Hill.\n\nThere were two parts to Joyce Green's military operations: the RFC and the Wireless Experimental Establishment. The latter was the first to move out, in 1917, when (after exhaustive searching south of London) it found an ideal site on a farmer's field near the village of Biggin Hill; the RFC recognized the new site's suitability for flying and its strategic location, and soon followed, transferring there on 13 February 1917. The RFC took with them their Bristol Fighters, leaving Joyce Green with only a pilots' pool and ground crew. Once the RFC had moved out of the aerodrome, Vickers continued their testing work until moving to Brooklands aerodrome. Following the Armistice with Germany the airfield was closed by December 1919.\n\nAir Vice Marshal Gould Lee wrote in his book \"Open Cockpit\", chapter 17: \u2018To use this waterlogged field for testing every now and then was reasonable and to take advantage of it as an emergency landing ground for Home Defence forces was credible, but to employ it as a flying training station was folly and as a Camel training station was lunacy. A pupil taking off with a choked or failing engine had to choose, according to wind direction, between drowning in the Thames (half a mile wide at this point), or crashing into the Vickers TNT works, or hitting one of their several high chimney stacks, or sinking into a vast sewage farm, or killing himself and numerous patients in a large isolation hospital, or being electrocuted in an electrical substation with acres of pylons and cables; or trying to turn and get back to the aerodrome. Unfortunately, many pupils confronted with disaster tried the last course and span to their deaths.\u2019 \n\nJimmy McCudden VC in his book \"Flying Fury\" described the airfield (where he and others such as Mick Mannock VC spent much time) as a \"quiet little spot near Dartford\", below sea level at the side of the Thames. The Corps resided in a wooden barrack block, and the actual airfield (grass runways) was located almost next to the River Thames, where many pilots lost their lives by drowning.\n\nThe Wireless Radio Unit found the foul weather, incessant mist, the state of the ground, the cold, and damp at Joyce Green not conducive to the best research. Numerous accidents, several fatalities and the planned formation of the Royal Air Force in 1918 led to the Wireless Testing Park eventually being moved to Biggin Hill in February 1917.\n\nJoyce Green cannot take all the blame for pilot losses, however; the Sopwith Camel was a demanding plane for all but the most experienced pilots, and had a fearsome reputation for spinning out of control during tight turns, causing the deaths of many young pilots during their training period. As the war progressed the quality of new students progressively declined, aggravating matters; with virtually no safety measures in place, one half of all pilots in training were killed at the many training bases.", "doc_id": "8e77ccf8-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Tropical_cyclones_in_1998", "document": "During 1998, tropical cyclones formed within seven different tropical cyclone basins, located within various parts of the Atlantic, Pacific, and Indian Oceans.
During the year, a total of 125 tropical cyclones formed, 72 of which were named by various weather agencies when they attained maximum sustained winds of 35 knots (65 km/h, 40 mph). The strongest tropical cyclones of the year were Zeb, Ron and Susan, each of which peaked with a pressure of 900 hPa (26.58 inHg). Hurricane Mitch of late October was the deadliest tropical cyclone of the year, blamed for more than 11,000 deaths as it catastrophically affected Central America and Mexico as a Category 5 major hurricane. Meanwhile, Georges became the costliest, with damages amounting to $9.37 billion, making it also the costliest storm in the history of the Dominican Republic and of Saint Kitts and Nevis. Four Category 5 tropical cyclones formed in 1998.\n\nAn average Atlantic hurricane season features 12 tropical storms, 6 hurricanes, and 3 major hurricanes, and an Accumulated Cyclone Energy (ACE) count of 106. In 2020, the North Atlantic basin exceeded all of these averages, featuring a record-breaking 30 tropical storms, 13 hurricanes, and 6 major hurricanes, with an ACE total of 178.\n\nThe 1998 Atlantic hurricane season was one of the most disastrous Atlantic hurricane seasons on record, featuring the highest number of storm-related fatalities in over 218 years, and was one of the costliest ever at the time. The season had above-average activity, due to the dissipation of the El Ni\u00f1o event and the transition to La Ni\u00f1a conditions.\n\nThe most notable storms were Hurricane Georges and Hurricane Mitch. Georges, which peaked as a high-end Category 4 hurricane, devastated Saint Kitts and Nevis, Puerto Rico and the Dominican Republic as a major Category 3 storm while moving through many of the Caribbean islands, then affected the southern US mainland, making its landfall near Biloxi, Mississippi, and causing significant damage and at least 600 confirmed deaths. Mitch, the strongest storm of the season, was a very powerful and destructive late-season Category 5 hurricane that affected much of Central America before making landfall in Florida as a tropical storm. The significant amount of rainfall that Mitch produced across Central America caused enormous damage and killed at least 11,000 people, making the system the second deadliest Atlantic hurricane in recorded history, behind only the Great Hurricane of 1780. Mitch was later tied with 2007's Hurricane Dean as the eighth-most intense Atlantic hurricane ever recorded.\n\nHurricanes Georges and Mitch caused $9.37 billion and $6.08 billion (1998 USD) in damage, respectively, and the 1998 Atlantic hurricane season was, at the time, the second-costliest season ever, after the 1992 season. However, it is now the eleventh-costliest season, having since been surpassed by seasons such as the 2005 Atlantic hurricane season.\n\nAn average Pacific hurricane season features 15 tropical storms, 9 hurricanes, and 4 major hurricanes, and an Accumulated Cyclone Energy (ACE) count of 132.\n\nThe season produced 13 named storms, slightly below the average of 15 named storms per season. However, the season total of nine hurricanes was one above the average, and the total of six major hurricanes surpassed the average of three. Activity during the season was hindered by the northward movement of the Intertropical Convergence Zone (ITCZ).
The ITCZ, which is normally situated south of the Gulf of Tehuantepec, shifted northward into central and southern Mexico, bringing developing cyclones closer to cooler sea surface temperatures and hence limiting the number of storms that formed during the season. Although a semi-permanent anticyclone persisted through the summer of 1998, causing most of the storms to remain at sea, some storms did threaten the Baja California Peninsula due to a weakness in the anticyclone. Except for Hurricane Kay, all of the storms of the season originated from tropical waves.\n\nThe average typhoon season lasts year-round, with the majority of the storms forming between May and October. An average Pacific typhoon season features 26 tropical storms, 16 typhoons, and 9 super typhoons (unofficial category). It also features an average Accumulated Cyclone Energy (ACE) count of approximately 294; the basin is typically the most active basin for tropical cyclone formation.\n\nDuring the 1998 Pacific typhoon season, a total of 28 tropical depressions developed across the western Pacific basin. Of those 28 depressions, a total of 18 strengthened into tropical storms, of which 9 further intensified into typhoons. The first tropical cyclone developed on May 28, marking the fourth-latest start to any Pacific typhoon season on record, and the last one dissipated on December 22. The Philippine region also set a record: with only eleven storms forming or moving into its area of responsibility, PAGASA had its quietest season as of 2006. The overall inactivity was caused by an unusually strong La Ni\u00f1a, which also fueled a hyperactive Atlantic hurricane season that year.\n\nWith eleven depressions and eight tropical cyclones, this was one of the most active seasons in the ocean, along with 1987, 1996, and 2005. The season caused a large loss of life, most of which was from one storm. Over 10,000 people were killed in India when Tropical Cyclone 03A brought a 4.9-metre (16 ft) storm surge to the Kathiawar Peninsula, inundating numerous salt mines. Total damages from the storm amounted to Rs. 120 billion (US$3 billion). Tropical Cyclone 01B killed at least 26 people and left at least 4,000 fishermen missing in eastern Bangladesh on May 20. A short-lived depression in mid-October killed 122 people after triggering severe flooding in Andhra Pradesh. In November, Tropical Cyclone 06B killed six people and caused property damage worth BTN 880 million (US$20.7 million) in eastern India. An additional 40 people were killed and 100 fishermen were listed as missing after Tropical Cyclone 07B affected Bangladesh.", "doc_id": "8e77cdfc-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Impala", "document": "The impala or rooibok (Aepyceros melampus) is a medium-sized antelope found in eastern and southern Africa. The only extant member of the genus Aepyceros and tribe Aepycerotini, it was first described to European audiences by German zoologist Hinrich Lichtenstein in 1812. Two subspecies are recognised\u2014the common impala, and the larger and darker black-faced impala. The impala reaches 70\u201392 cm (28\u201336 in) at the shoulder and weighs 40\u201376 kg (88\u2013168 lb). It features a glossy, reddish brown coat. The male's slender, lyre-shaped horns are 45\u201392 cm (18\u201336 in) long.\n\nActive mainly during the day, the impala may be gregarious or territorial depending upon the climate and geography. Three distinct social groups can be observed: the territorial males, bachelor herds and female herds.
The impala is known for two characteristic leaps that constitute an anti-predator strategy. Browsers as well as grazers, impala feed on monocots, dicots, forbs, fruits and acacia pods (whenever available). An annual, three-week-long rut takes place toward the end of the wet season, typically in May. Rutting males fight over dominance, and the victorious male courts females in oestrus. Gestation lasts six to seven months, following which a single calf is born and immediately concealed in cover. Calves are suckled for four to six months; young males\u2014forced out of the all-female groups\u2014join bachelor herds, while females may stay back.\n\nThe impala is found in woodlands and sometimes on the interface (ecotone) between woodlands and savannahs; it inhabits places near water. While the black-faced impala is confined to southwestern Angola and Kaokoland in northwestern Namibia, the common impala is widespread across its range and has been reintroduced in Gabon and southern Africa. The International Union for Conservation of Nature (IUCN) classifies the impala as a species of least concern; the black-faced subspecies has been classified as a vulnerable species, with fewer than 1,000 individuals remaining in the wild as of 2008.\n\nThe impala is the sole member of the genus Aepyceros and belongs to the family Bovidae. It was first described by German zoologist Martin Hinrich Carl Lichtenstein in 1812. In 1984, palaeontologist Elisabeth Vrba opined that the impala is a sister taxon to the alcelaphines, given its resemblance to the hartebeest. A 1999 phylogenetic study by Alexandre Hassanin (of the National Centre for Scientific Research, Paris) and colleagues, based on mitochondrial and nuclear analyses, showed that the impala forms a clade with the suni (Neotragus moschatus). This clade is sister to another formed by the bay duiker (Cephalophus dorsalis) and the klipspringer (Oreotragus oreotragus). An rRNA and \u03b2-spectrin nuclear sequence analysis in 2003 also supported an association between Aepyceros and Neotragus. According to Vrba, the impala evolved from an alcelaphine ancestor. She noted that while this ancestor has diverged at least 18 times into various morphologically different forms, the impala has continued in its basic form for at least five million years. Several fossil species have been discovered, including A. datoadeni from the Pliocene of Ethiopia. The oldest fossil discovered suggests its ancient ancestors were slightly smaller than the modern form, but otherwise very similar in all aspects to the latter. This implies that the impala has efficiently adapted to its environment since prehistoric times. Its gregarious nature, variety in diet, positive population trend, defence against ticks and symbiotic relationship with the tick-feeding oxpeckers could have played a role in preventing major changes in morphology and behaviour.\n\nThe impala is a medium-sized, slender antelope similar to the kob or Grant's gazelle in build. The head-and-body length is around 130 centimetres (51 in). Males reach approximately 75\u201392 centimetres (30\u201336 in) at the shoulder, while females are 70\u201385 centimetres (28\u201333 in) tall. Males typically weigh 53\u201376 kilograms (117\u2013168 lb) and females 40\u201353 kilograms (88\u2013117 lb). The species is sexually dimorphic: females are hornless and smaller than males. Males grow slender, lyre-shaped horns 45\u201392 centimetres (18\u201336 in) long. The horns, strongly ridged and divergent, are circular in section and hollow at the base.
Their arch-like structure allows interlocking of horns, which helps a male throw off his opponent during fights; horns also protect the skull from damage.\n\nThe glossy coat of the impala shows two-tone colouration \u2013 the reddish brown back and the tan flanks; these are in sharp contrast to the white underbelly. Facial features include white rings around the eyes and a light chin and snout. The ears, 17 centimetres (6.7 in) long, are tipped with black. Black streaks run from the buttocks to the upper hindlegs. The bushy white tail, 30 centimetres (12 in) long, features a solid black stripe along the midline. The impala's colouration bears a strong resemblance to that of the gerenuk, which has shorter horns and lacks the black thigh stripes of the impala. The impala has scent glands covered by a black tuft of hair on the hindlegs. Sebaceous glands concentrated on the forehead and dispersed on the torso of dominant males are most active during the mating season, while those of females are only partially developed and do not undergo seasonal changes. There are four nipples.\n\nOf the subspecies, the black-faced impala is significantly larger and darker than the common impala; melanism is responsible for the black colouration. Distinctive of the black-faced impala is a dark stripe, on either side of the nose, that runs upward to the eyes and thins as it reaches the forehead. Other differences include the larger black tip on the ear, and a bushier and nearly 30% longer tail in the black-faced impala.\n\nThe impala has a special dental arrangement on the front lower jaw similar to the toothcomb seen in strepsirrhine primates, which is used during allogrooming to comb the fur on the head and the neck and remove ectoparasites.", "doc_id": "8e77cf64-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Alex_Lifeson", "document": "Aleksandar \u017divojinovi\u0107, OC (born 27 August 1953), known professionally as Alex Lifeson, is a Canadian musician, best known as the guitarist and backing vocalist of the progressive rock band Rush. In 1968, Lifeson co-founded the band that would later become Rush, with drummer John Rutsey and bassist and lead vocalist Jeff Jones. Jones was replaced by Geddy Lee a month later, and Rutsey was replaced by Neil Peart in 1974. Before the band was disbanded in 2018, Lifeson was the only continuous member who stayed in Rush since its inception, and along with bass guitarist/vocalist Geddy Lee, the only member to appear on all of the band's albums.\n\nWith Rush, Lifeson played electric and acoustic guitars, as well as other string instruments such as mandola, mandolin, and bouzouki. He also performed backing vocals in live performances as well as the studio albums Rush (1974), Presto (1989) and Roll the Bones (1991) and occasionally played keyboards and bass pedal synthesizers. Like the other members of Rush, Lifeson performed real-time on-stage triggering of sampled instruments. Along with his bandmates Geddy Lee and Neil Peart, Lifeson was made an Officer of the Order of Canada on 9 May 1996. The trio was the first rock band to be so honoured as a group. In 2013, he was inducted with Rush into the Rock & Roll Hall of Fame.
Lifeson was ranked 98th on Rolling Stone's list of the 100 greatest guitarists of all time and third (after Eddie Van Halen and Brian May) in a Guitar World readers' poll listing the 100 greatest guitarists.\n\nThe bulk of Lifeson's work in music has been with Rush, although he has contributed to a body of work outside the band as well. Aside from music, Lifeson has been a painter, a licensed aircraft pilot, an actor, and the part-owner of a Toronto bar and restaurant called The Orbit Room.\n\nLifeson was born Aleksandar \u017divojinovi\u0107 in Fernie, British Columbia. His parents, Nenad and Melanija \u017divojinovi\u0107, were Serb immigrants from Yugoslavia. He was raised in Toronto. His stage name of \"Lifeson\" is a semi-literal translation of the surname \u017divojinovi\u0107, which means \"son of life\" in Serbian. Lifeson's first formal music training was on the viola, which he abandoned for the guitar at the age of 12. His first guitar was a Christmas gift from his father, a six-string Kent classical acoustic, which was later replaced by an electric Japanese model. During his adolescent years, he was influenced primarily by the likes of Jimi Hendrix, Pete Townshend, Jeff Beck, Eric Clapton, Jimmy Page, Steve Hackett, and Allan Holdsworth; he explained in 2011 that \"Clapton's solos seemed a little easier and more approachable. I remember sitting at my record player and moving the needle back and forth to get the solo in 'Spoonful.' But there was nothing I could do with Hendrix.\" In 1963, Lifeson met future Rush drummer John Rutsey in school. Both interested in music, they decided to form a band. Lifeson was primarily a self-taught guitarist; his only formal instruction came from a high school friend who taught classical guitar lessons in 1971. This training lasted for roughly a year and a half.\n\nOn New Year's Eve 2003, Lifeson, his son and his daughter-in-law were arrested at the Ritz-Carlton hotel in Naples, Florida. Lifeson, after intervening in an altercation between his son and police, was accused of assaulting a sheriff's deputy in what was described as a drunken brawl. In addition to suffering a broken nose at the hands of the officers, Lifeson was tased six times. His son was also tased repeatedly.\n\nOn 21 April 2005, Lifeson and his son agreed to a plea deal with the local prosecutor for the State's Attorney office to avoid jail time by pleading no contest to a first-degree misdemeanor charge of resisting arrest without violence. As part of the plea agreement, Lifeson and his son were each sentenced to 12 months of probation with the adjudication of that probation suspended. Lifeson acknowledged his subsequent legal action against both the Ritz-Carlton and the Collier County Sheriff's Office for \"their incredibly discourteous, arrogant and aggressive behaviour of which I had never experienced in 30 years of travel\". Although both actions were initially dismissed in April 2007, legal claims against the Ritz-Carlton were reinstated upon appeal and they were settled out of court on a confidential basis in August 2008. In his journal-based book Roadshow: Landscape with Drums \u2013 A Concert Tour by Motorcycle, Peart relates the band's perspective on the events of that New Year's Eve.", "doc_id": "8e77d072-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/1997%E2%80%9398_Chicago_Bulls_season", "document": "The 1997\u201398 NBA season was the Bulls' 32nd season in the National Basketball Association.
The Bulls entered the season as the two-time defending NBA champions. In the Finals, they met the Utah Jazz in a rematch of the prior year's NBA Finals and, just as in that year, went on to defeat the Jazz in six games to win their sixth championship in eight years and complete the franchise's second \"3-peat\".\n\nDuring the off-season, the Bulls acquired Scott Burrell from the Golden State Warriors. However, All-Star forward Scottie Pippen would miss the first half of the season due to an injured toe on his left foot sustained in the 1997 NBA Playoffs. Without Pippen, the Bulls started with a slow 9\u20137 record in November, but then went on a 15\u20134 run until he returned in January. However, three-point specialist Steve Kerr went down with a knee injury in January, and played just 50 games. Despite the injuries, the Bulls held a 34\u201315 record at the All-Star break. At midseason, the team traded Jason Caffey to the Golden State Warriors in exchange for David Vaughn. Vaughn played just three games with the Bulls before being waived on March 2. Also in early March, the team re-signed former Bulls reserve forward Dickey Simpkins, who had previously been released by the Warriors; he played in the final 21 games of the regular season. Despite the slow start, and with the help of Pippen's return, though he was limited to just 44 games, the Bulls posted a 13-game winning streak between March and April and finished first in the Central Division and Eastern Conference with a 62\u201320 record. The Bulls had the third-best team defensive rating in the NBA.\n\nIn the playoffs, the Bulls swept the New Jersey Nets 3\u20130 in the Eastern Conference First Round, defeated the Charlotte Hornets 4\u20131 in the Eastern Conference Semi-finals, despite losing Game 2 at the United Center 78\u201376, and then defeated the Indiana Pacers 4\u20133 in the Eastern Conference Finals to advance to the NBA Finals. There they again met the Utah Jazz and, just as the year before, defeated them in six games to win the championship, their sixth in eight years, completing the franchise's second \"3-peat\".\n\nThe season also saw Michael Jordan earn his fifth and final NBA Most Valuable Player Award, while being selected for the 1998 NBA All-Star Game, where he also won his third and final All-Star Game MVP Award. He once again led the league in scoring, averaging 28.7 points, 5.8 rebounds and 1.7 steals per game, was named to the All-NBA First Team and NBA All-Defensive First Team, and finished fourth in Defensive Player of the Year voting. In addition, Pippen averaged 19.1 points, 5.2 rebounds, 5.8 assists and 1.8 steals per game, was selected to the All-NBA Third Team and the All-Defensive First Team, and finished tenth in Most Valuable Player voting, while rebound specialist Dennis Rodman once again led the league in rebounding with 15.0 rebounds per game. Toni Kuko\u010d provided the team with 13.3 points per game, playing most of the season as the team's starting small forward in Pippen's absence, while Luc Longley averaged 11.4 points and 5.9 rebounds per game, and Ron Harper contributed 9.3 points and 1.3 steals per game.\n\nThis was Jordan's last season as a Bull, as he announced his second retirement after it was over. However, he did make a second comeback with the Washington Wizards in 2001.
Following the season, Phil Jackson resigned as head coach, while Pippen was traded to the Houston Rockets, Rodman left for the Los Angeles Lakers as a free agent, Longley was dealt to the Phoenix Suns, Kerr signed with the San Antonio Spurs, Burrell signed with the New Jersey Nets, and Jud Buechler signed with the Detroit Pistons.\n\nThis dismantling of the team made 1997\u201398 the last season of the Bulls dynasty that had headlined the NBA throughout the 1990s. What followed was a long rebuilding process between 1998 and 2004, and the Bulls did not return to the postseason until 2005.\n\nThis was the first time in the 1990s that the same two teams played each other in two consecutive Finals. The Jazz had won both regular-season match-ups, and many analysts predicted a hard-fought seven-game series. Predictions of a Jazz championship were strengthened by their Game 1 victory in overtime in Utah. The Bulls tied the series in Game 2, putting together a fourth-quarter run to silence the Delta Center and holding on to win 93\u201388, finally securing their first victory against Utah all season.\n\nThe Finals moved to Chicago with control of the series at stake in Game 3. Though anticipation was high, no one could have expected a blow-out of the proportions seen in that game. With a 96\u201354 triumph over Utah, the Bulls handed the Jazz embarrassing records for the fewest points scored in Finals history and the biggest margin of defeat, while every Bulls player scored. The Jazz pulled themselves together in Game 4 in an attempt to tie the series, but lost 86\u201382.\n\nThe early Jazz series lead seemed like a distant memory, a false indication of a tough series, as the teams hit the floor for Game 5 with Utah behind 3\u20131. Chicago fans prepared for the last game they would host with the Jordan-led Bulls of the 1990s. But any notions of a championship at the United Center would be snuffed out when, with 0.8 seconds left in the game, Michael Jordan airballed an off-balance 3 to the right of the basket, giving the Jazz a narrow 83\u201381 win. The play might have been designed for Toni Kuko\u010d to shoot a three. With the series shifting back to Utah and the Bulls' advantage narrowed to 3\u20132, the promise of another Chicago championship was not so certain.\n\nThe Chicago Bulls had never let a Finals series go to a Game 7.\n\nAs they arrived at the Delta Center for Game 6, things didn't look good for the Bulls. Scottie Pippen's back gave out when he dunked the opening basket of the game, and he was slowed for the rest of the night, held to just 8 points. The Jazz suffered a bad break when the referees incorrectly nullified a Howard Eisley three-pointer that, replays showed, was clearly released just before the 24-second clock expired. In the fourth quarter, the Bulls closed the gap as Michael Jordan tallied many of his 45 points. Then things got worse for Chicago when John Stockton hit a clutch 3 with 41.9 seconds left to give Utah an 86\u201383 lead as the Delta Center crowd roared. Down by 3, the Bulls had one last chance to stay alive. With the team running perilously low on energy, it was imperative for Chicago to win the series before the game went into overtime, and to avoid a Game 7 on the road with Scottie Pippen so badly injured and the entire lineup exhausted.\n\nAfter Michael Jordan made a quick layup to cut the Jazz lead to one, the Bulls needed to stop the Jazz from scoring again.
When John Stockton passed the ball to Karl Malone, Michael Jordan stole the ball and dribbled upcourt. Guarding him was Bryon Russell, one of the Jazz's best perimeter defenders. Jordan drove inside the 3-point line, executed a quick cross-over, and drilled a 20-ft. jump shot to give the Bulls an 87\u201386 lead with 5.2 seconds left. After Utah took a timeout, Stockton's 3 hit the rim and bounced away, giving the Bulls their 6th title in 8 years. The famous winning shot has been immortalized in many records, as Jordan completed a perfect sextet: 6 NBA Finals, 6 championships, and 6 NBA Finals MVP trophies.", "doc_id": "8e77d202-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Neuengamme_concentration_camp", "document": "Neuengamme was a network of Nazi concentration camps in Northern Germany that consisted of the main camp, Neuengamme, and more than 85 satellite camps. Established in 1938 near the village of Neuengamme in the Bergedorf district of Hamburg, the Neuengamme camp became the largest concentration camp in Northwest Germany. Over 100,000 prisoners came through Neuengamme and its subcamps, 24 of which were for women. The verified death toll is 42,900: 14,000 in the main camp, 12,800 in the subcamps, and 16,100 in the death marches and bombings during the final weeks of World War II. Following Germany's defeat in 1945, the British Army used the site as an internment camp for SS and other Nazi officials. In 1948, the British transferred the land to the Free Hanseatic City of Hamburg, which summarily demolished the camp's wooden barracks and built in its stead a prison cell block, converting the former concentration camp site into two state prisons operated by the Hamburg authorities from 1950 to 2004. Following protests by various groups of survivors and allies, the site now serves as a memorial. It is situated 15 km southeast of the centre of Hamburg.\n\nIn 1937, Hitler declared that five cities would be converted into F\u00fchrer cities (German: F\u00fchrerst\u00e4dte) of the new Nazi regime, one of which was Hamburg. The banks of the Elbe river in Hamburg, a city considered Germany's \"Gateway to the World\" for its large port, were to be redone in the clinker brick style characteristic of German Brick Expressionism.\n\nTo supply the bricks, the SS-owned company Deutsche Erd-und Steinwerke (DESt) (English: German Earth & Stone Works) purchased a defunct brick factory (German: Klinkerwerk) and 500,000 m\u00b2 of land in Neuengamme in September 1938.\n\nThe SS established the Neuengamme concentration camp on 13 December 1938 as a subcamp (German: Au\u00dfenlager) of the Sachsenhausen concentration camp and transported 100 prisoners from Sachsenhausen to begin constructing a camp and operating the brickworks.\n\nIn January 1940, Heinrich Himmler visited the site and deemed Neuengamme brick production below standard. In April 1940, the SS and the city of Hamburg signed a contract for the construction of a larger, more modern brick factory, an expanded connecting waterway, and a direct supply of bricks and prisoners for construction work in the city.\n\nOn 4 June, the Neuengamme concentration camp became an independent camp (German: Stammlager), and transports began to arrive from all over Germany and soon the rest of Europe.\n\nAs the death rate climbed between 1940 and 1942, a crematorium was constructed in the camp.
In 1942, the civilian corporations Messap and Jastram opened armament plants on the camp site and used concentration camp prisoners as their workforces. After the war turned at Stalingrad, the Nazis imprisoned millions of Soviets in the concentration camp system; Soviet POWs became the largest prisoner group in the Neuengamme camp and received brutal treatment from SS guards.\n\nThe first satellite camp, Dr\u00fctte, was established in Salzgitter, and in less than a year close to 80 subcamps were constructed.\n\nBy the end of 1942, the death rate had risen to 10% per month. In 1943, the satellite camp on the Channel Island of Alderney was established. In July 1944, a special section of the camp was set up for prominent French prisoners, comprising political opponents and resisters of the German occupation of France. These prisoners included John William, who had participated in the sabotaging and bombing of a military factory in Montlu\u00e7on. William discovered his singing voice while cheering his fellow prisoners at Neuengamme and went on to a prominent career as a singer of popular and gospel music.\n\nBy the end of 1944, the total number of prisoners had grown to approximately 49,000, with 12,000 in Neuengamme and 37,000 in the subcamps, including nearly 10,000 women in the various subcamps for women.", "doc_id": "8e77d2c0-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Newmarket_Canal", "document": "The Newmarket Canal, officially known but rarely referred to as the Holland River Division, is an abandoned barge canal project in Newmarket, Ontario. With a total length of about 10 miles (16 km), it was supposed to connect the town to the Trent\u2013Severn Waterway via the East Holland River and Lake Simcoe. Construction was almost complete when work was abandoned, and the three completed pound locks, a swing bridge and a turning basin remain largely intact to this day.\n\nThe project was originally presented as a way to avoid paying increasing rates on the Northern Railway of Canada, which threatened to make business in Newmarket uncompetitive. The economic arguments for the canal were highly debatable, as the exit of the Waterway in Trenton was over 170 kilometres (110 mi) east of Toronto, while Newmarket was only 50 kilometres (31 mi) north of the city. Moreover, predicted traffic was very low, perhaps 60 tons a day in total, enough to fill two or three barges at most.\n\nFrom the start, the real impetus for the project was to bring federal money to the riding of York North, which was held by powerful Liberal member William Mulock. That it was a patronage project was clear to all, and it was under constant attack in the press and the House of Commons. As construction started in 1908, measurements showed there was too little water to keep the system operating at a reasonable rate through the summer months. From then on it was heaped with scorn in the press and became the butt of jokes and nicknames, including \"Mulock's Madness\".\n\nThe canal was one of the many examples of what the Conservative Party of Canada characterized as out-of-control spending on the part of the ruling Liberals. Their success in the 1911 federal election brought Robert Borden to power and changes at the top of the Department of Railways and Canals. The new government quickly placed a hold on ongoing construction and, a few weeks later, ended construction outright.
Today, locals refer to it as \"The Ghost Canal\".\n\nThe canal route starts on the eastern arm of the Holland River, which splits off from the western arm just south of Cook's Bay. The eastern arm runs roughly southward through River Drive Park and into Holland Landing on the west side of town. At the southwestern corner of town it turns to the southeast, and after a short distance reaches the first lock, now under the bridge carrying \"old\" Yonge Street into town.\n\nThe river continues southeast for two kilometres, where it reaches the second lock at Concession 2 (Bayview Avenue). Here it meets the inlet of the Rogers Reservoir, named for Timothy Rogers, who settled the area; the reservoir provided a water buffer. At this point it turns south again for just over a kilometre before turning southwest into Newmarket through the third lock, which lies just north of the canal's endpoint.\n\nThe canal ended in a turning basin on the north side of Davis Drive in Newmarket, at what was then the northern end of the downtown area. The Holland River continues south from this point, with Main Street running parallel to it on the west bank, and the train line on the east bank. The town originally grew up along this north\u2013south axis. Fairy Lake, on the southern edge of downtown, is 1 km to the south of the turning basin.\n\nThe canal route remains largely intact. The southern portion is now paralleled by the Nokiidaa Bicycle Trail from Newmarket to Holland Landing. The turning basin was filled in during the 1980s, and now forms the eastern section of the parking lot for the Tannery Mall and the associated Newmarket GO Station.\n\nThe single-lane swing bridge over Green Lane was used until 2002, when it was replaced by a much larger four-lane bridge as part of the construction of the Newmarket Bypass. The original bridge structure itself was replaced by a footbridge a few years later, with the original swing mechanism relocated to one end of the new footbridge. Lock #1 in Holland Landing was used as the foundation for a bridge along a new routing for Yonge Street, lock #2 was likewise used for a bridge on 2nd Concession Road, and lock #3 in Newmarket now carries the bike trail over the river.", "doc_id": "8e77d3a6-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Woodstock,_Virginia", "document": "Woodstock is a town and the county seat of Shenandoah County, Virginia, United States. It has a population of 5,212 according to a 2017 estimate. The town comprises 3.2 square miles of incorporated area and is located along the \"Seven Bends\" of the north fork of the Shenandoah River. While some tourism references list Woodstock as the fourth oldest town in Virginia, the area was sparsely settled and perhaps platted in 1752 or shortly thereafter, but the town was actually established by charter in 1761. While there are a number of Virginia towns closer to the eastern seaboard that claim earlier founding dates, Woodstock was one of the first towns west of the Blue Ridge.\n\nThe Massanutten Military Academy is located in Woodstock, as is the national headquarters of Sigma Sigma Sigma sorority. Woodstock is also home to the River Bandits of the Valley Baseball League, the Shenandoah County Public Schools' Central campus, and the Shenandoah County Fairgrounds.\n\nThe town was established by charter in March 1761 as a part of what was then Frederick County.
It was originally formed from a land grant from Lord Fairfax, and founded as Muellerstadt (Miller Town) in 1752 by Jacob Muller (or \"Mueller\"). The town's charter was sponsored by George Washington in Virginia's House of Burgesses. Woodstock has been the county seat of Shenandoah County since the county's formation in 1772.\n\nThe Shenandoah Valley region around Woodstock was settled by Pennsylvania Germans who migrated south down the natural route of the Shenandoah Valley in the mid-1700s. The majority of these German settlers tended small farms that grew crops other than tobacco, were not slaveholders and had Protestant faiths different from the established Anglican church in Virginia. They thus had a culture and beliefs different from those of the English society that was prevalent on the eastern side of the mountains.\n\nThe Senedo people lived in the Shenandoah Valley around Woodstock, but they disappeared as a tribe prior to European settlement, possibly from attack by the Catawba to the south. By the time the German settlers arrived, few Native Americans lived in the Shenandoah Valley. Several later tribes hunted in the valley, among them the Shawnee, Occoneechee, Monocans and Piscataways and the powerful Iroquois Confederation, so while they did not inhabit the area, Indians were likely not an uncommon sight. The seven bends have locations associated with Indian mounds dating back to the Late Woodland Period (AD 900\u20131650) in the area of the river between Woodstock and Strasburg, Virginia. After 250 years of plowing by settlers, the mounds have largely disappeared from sight, though traces of them have been detected with aerial photography.\n\nIn the early days, relations between Indians and settlers were friendly. In the 1750s settlers began to sense trouble when Indians moved further west, over the Allegheny Mountains, where they were under the influence of the French. During the French and Indian War, the French encouraged Indian raiding parties against so-called \"English settlers\", though most settlers in the Woodstock area were likely peaceable Germans. In the 1760s, there was constant danger of Indian raids, with some atrocities and brutality. The last Indian raid in the area occurred in 1766, three years after the formal end of the French and Indian War, about two miles south of Woodstock.\n\nRoute 11, which runs through Woodstock, was originally an Indian trail that served as a route between the Catawba in the south and the Delawares in the north, who were warring rivals. This came to be known as the Indian Road, and was the main route for settlement and travel through the Shenandoah Valley. With many improvements, Route 11 has largely followed this route, which was later called the Great Wagon Road and then the Valley Pike. Jacob Muller apparently used this old trail in laying out the plans for the main street of what would become Woodstock. Muellerstadt was the early name for Woodstock.\n\nThe new village was established by an act in 1761, sponsored by George Washington. The town was renamed Woodstock at that time. George Washington was a member of the Virginia House of Burgesses, representing Frederick County (the Woodstock area was then part of Frederick County and would remain so until 1772). The act of the General Assembly gave full credit to Jacob Muller for initiating the idea. Muller came from Germany in 1749 and had temporarily settled in Pennsylvania.
By 1752 he had obtained 400 acres from Lord Fairfax for the area that would eventually be included in the town limits of Woodstock. Muller settled in Narrow Passage near Woodstock, and in the next few years his holdings grew to somewhere between 1,200 and 2,000 acres, and he proceeded to lay out a plan for the town, Mullerstadt. A few white settlers had preceded Muller, as the 1761 act establishing the town noted \"several persons are now living there\". It is realistic to assume this meant a scattering of log buildings. However, Muller's town plan was the one referred to in the 1761 General Assembly act that established Woodstock.\n\nThere is no clear reason why the town's name was changed to Woodstock, though theories include it being renamed by Washington or perhaps for a wood stockade used by the community as shelter from Indian raids. Nevertheless, Jacob Muller's town continued for many years to be known as Millerstown, or to German-language residents, Muellerstadt. During the years following the establishment of the town, Muller held a big land sale in which 40 parcels he had platted were purchased. Muller died in 1766, just four years after his land sales. Andrew Brewbaker, his son-in-law, became proprietor of his grant, supported by a board of trustees appointed by the General Assembly to govern the new town. This form of government continued until 1795, when the town was authorized to hold elections. Unfortunately, the Town Trustees appointed in 1761 left no records, so the early history of Woodstock as a town cannot be determined with accuracy. There was also no local newspaper until 1817.", "doc_id": "8e77d4d2-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Deal_or_No_Deal_(American_game_show)", "document": "Deal or No Deal is an American version of the international game show of Dutch origin of the same name. The show is hosted by Howie Mandel, and premiered on December 19, 2005, on NBC. The hour-long show typically aired at least twice a week during its run, and included special extended or theme episodes. The show started its fourth season on August 25, 2008, a day after NBC's coverage of the 2008 Beijing Olympics ended. A daily syndicated half-hour version of the show debuted on September 8, 2008, and continued for two seasons.\n\nThe game is primarily unchanged from the international format: a contestant chooses one briefcase from a selection of 26. Each briefcase contains a cash value from $0.01 to $1,000,000. Over the course of the game, the contestant eliminates cases from the game, periodically being presented with a \"deal\" from The Banker to take a cash amount to quit the game. Should the contestant refuse every deal, they are given the chance to trade the case they chose at the outset for the only one left in play at the time; they then win the amount in the selected case.\n\nSpecial variations of the game, including a \"Million Dollar Mission\" introduced in the third season, were also used, as well as a tie-in with a viewer \"Lucky Case Game\".\n\nThe show was a success for NBC, typically averaging from 10 to 16 million viewers each episode in the first season, although the subsequent seasons only averaged about 5\u20139 million viewers each episode. It has led to the creation of tie-in board, card, and video games, as well as a syndicated series played for smaller dollar amounts.\n\nThe show went on hiatus in early 2009, and its Friday night time slot was replaced with Mandel's other series Howie Do It.
The network later announced that Deal or No Deal would return on May 4, 2009, to air its remaining episodes. These remaining four episodes were taped in September 2008 and aired over three consecutive Mondays: the first on May 4, 2009, the second on May 11, 2009, and the final two on May 18, 2009.\n\nOn December 3, 2018, the show returned to NBC as a holiday special with original host Howie Mandel. New episodes of the program began airing on CNBC on December 5, 2018. The show aired its final episode on August 7, 2019.\n\nThe contestant chooses one of 26 numbered briefcases at the start of the game. These cases, carried by twenty-six identically dressed female models, each hold a different cash amount from $0.01 to $1,000,000. On the stage is a video wall that displays the amounts still in play at any given moment. The contestant's chosen case is brought onto the stage and placed on a podium before them and the host.\n\nIn the first round, the contestant chooses six cases to eliminate from play, one at a time. Each case is opened as it is chosen, and the amount inside is removed from the board. After the sixth pick, a cordless telephone on the podium rings and the host answers it to speak with \"The Banker\", visible only as a silhouette, who sits in a skybox overlooking the studio. The Banker's face is never seen, and their voice is never heard. After the call ends, the host relays the Banker's offer to buy the contestant's case. The contestant can accept the offer and end the game by saying \"deal\" and pressing a red button on the podium, or reject it by saying \"no deal\" and closing a hinged cover over the button.\n\nEach time an offer is rejected, the contestant must play another round, eliminating progressively fewer cases: five in the second round, four in the third, three in the fourth, two in the fifth. Beyond the fifth round, the contestant eliminates one case at a time, receiving a new offer from the Banker after each. The ninth and final offer comes when there are only two cases left in play: the one originally chosen by the contestant and one other. If the contestant rejects this final offer, they may either keep the chosen case or trade it for the other. The contestant receives the amount in the case taken.\n\nThe Banker's offer is typically a percentage of the average of the values still in play at the end of each round. This percentage is small in the early rounds, but increases as the game continues and can even exceed 100% in very late rounds (a simple simulation of this round-and-offer structure is sketched below). At times, an offer includes a prize tailored to the contestant's interests, either in addition to cash or instead of it. Also, prizes are occasionally substituted for some of the cash amounts on the board. Starting with the Banker's offer in the second round, the contestant can bring a \"cheering section\" (e.g., friends, family members or colleagues) to the edge of the stage for advice on case selection and whether to accept offers. However, only the contestant's decisions are counted as part of the game.\n\nIf a contestant accepts one of the Banker's first eight offers, and if time permits, the host encourages the contestant to play through additional rounds to see what would have happened if they had not accepted the offer.
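The round-and-offer mechanics described above can be made concrete with a short simulation. The following Python sketch is illustrative only: the 26-case board and the 6-5-4-3-2-1-1-1-1 elimination schedule follow the description above, but the offer percentages (an assumed ramp from roughly 15% toward 100%) merely stand in for the Banker's undisclosed formula, and all names in the code are hypothetical.

```python
import random

# The 26 amounts on the U.S. show's board, from $0.01 to $1,000,000.
CASE_VALUES = [
    0.01, 1, 5, 10, 25, 50, 75, 100, 200, 300, 400, 500, 750, 1_000,
    5_000, 10_000, 25_000, 50_000, 75_000, 100_000, 200_000, 300_000,
    400_000, 500_000, 750_000, 1_000_000,
]

# Cases eliminated before each of the nine Banker offers
# (6+5+4+3+2+1+1+1+1 = 24, leaving two cases in play at the final offer).
PICKS_PER_ROUND = [6, 5, 4, 3, 2, 1, 1, 1, 1]


def banker_offer(values_in_play, round_index, n_rounds=9):
    """Offer a percentage of the average of the values still in play.

    The ramp from 15% toward 100% is an assumption for illustration;
    the show has never published the Banker's actual formula.
    """
    average = sum(values_in_play) / len(values_in_play)
    percentage = 0.15 + 0.85 * (round_index / (n_rounds - 1))
    return round(average * percentage)


def simulate_game():
    board = CASE_VALUES[:]
    random.shuffle(board)
    chosen = board.pop()  # the contestant's own case stays sealed until the end
    for round_index, picks in enumerate(PICKS_PER_ROUND):
        for _ in range(picks):
            opened = board.pop()  # eliminate a case and reveal its amount
            print(f"Opened: ${opened:,.2f}")
        # The chosen case is unopened, so its value still counts as "in play".
        offer = banker_offer(board + [chosen], round_index)
        print(f"Round {round_index + 1} offer: ${offer:,} "
              f"({len(board) + 1} cases left)")
    print(f"The contestant's case held ${chosen:,.2f}")


if __name__ == "__main__":
    simulate_game()
```

Each run prints one possible game; the point of the sketch is that the offer tracks the expected value of the unopened cases, rising toward that average as the rounds progress (on the real show, offers can even exceed it).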
If time runs short, if the proveout eliminates the last value higher than the contestant's accepted offer (showing that the contestant was not going to win more), or if there are only two cases remaining, the host opens the contestant's case to show whether the deal was a good or bad one, and all of the remaining cases are then opened at once.", "doc_id": "8e77d608-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/2022_Sri_Lankan_political_crisis", "document": "The 2022 Sri Lankan political crisis is an ongoing[further explanation needed] political crisis in Sri Lanka due to the power struggle between President Gotabaya Rajapaksa and the Parliament of Sri Lanka. It is fueled by the anti-government protests and demonstrations by the public due to the economic crisis in the country. The anti-government sentiment across various parts of Sri Lanka has triggered a state of political instability unlike any the country has seen in its history.\n\nThe political crisis began on 3 April 2022, after all 26 members of the Second Gotabaya Rajapaksa cabinet, with the exception of Prime Minister Rajapaksa, resigned en masse overnight. Critics said the resignations were not valid, as they did not follow constitutional protocol, and thus deemed them a \"sham\"; several ministers were reinstated in different ministries the next day. There were growing calls for the formation of a caretaker government to run the country, or for snap elections, but the latter option was deemed unviable due to paper shortages and concerns over election expenditure, which would run into billions.\n\nProtestors took to the streets to show their anger and displeasure over the government's mismanagement of the economy and urged President Gotabaya to step down immediately to make way for political change; he refused to do so, eventually fleeing to Singapore and resigning on 14 July. The main opposition Samagi Jana Balawegaya determined to abolish the 20th Amendment by bringing a private member's Bill in order to scrap the executive powers of the Executive Presidency.\n\nSri Lankans took to the streets calling on the President and the government to step down. Many young adults, including university students, took part in peaceful protests calling for a major overhaul of the system and urged lawmakers to pave the way for youngsters to lead the country. Protestors also demanded the removal of the 20th Amendment to the Sri Lankan Constitution, as well as the abolition of the Executive Presidency. Some protestors also urged all 225 MPs to go home so that new faces could be elected to parliament. During the protests, there were growing calls to elect educated, academically qualified people to parliament, and calls to reveal the net worth and assets of politicians.\n\nPolitical instability grew with the resignation of 26 cabinet ministers on 3 April 2022. The resignations were deemed null and void, according to the provisions of the Twentieth Amendment to the Constitution of Sri Lanka, as the ministers tendered their resignations to the Prime Minister instead of the President. Sports and Youth Minister Namal Rajapaksa, Prime Minister Mahinda Rajapaksa's son, and the Prime Minister's brothers Chamal Rajapaksa and Basil Rajapaksa also resigned.\n\nThe president immediately took major steps to form an all-party interim government and invited all the parties to form a new government as a temporary solution up until the 2022 Sri Lankan presidential election and the next Sri Lankan parliamentary election in 2025.
The all-party interim government would have kept both the President and the Prime Minister unchanged, but the cabinet of ministers would have included members representing various parties. The main opposition parties, the SJB and the JVP, declined the proposal and urged the entire government, including the President, to resign. There were rumours and speculation that Mahinda Rajapaksa would resign from his position as Prime Minister, but these proved false when it was revealed that Mahinda would stay in power.\n\nOn 18 April 2022, Gotabaya appointed a new 17-member cabinet despite the protests calling for the entire government, including the president and all 225 MPs in parliament, to resign. Dinesh Gunawardena was appointed Public Administration and Internal Affairs minister; Douglas Devananda, Fisheries minister; Kanaka Herath, Highways minister; Dilum Amunugama, Transport and Industries minister; Prasanna Ranatunga, Public Security and Tourism minister; Channa Jayasumana, Health minister; Nalaka Godahewa, Media minister; Pramitha Tennakoon, Ports and Shipping minister; Amith Thenuka Vidanagamage, Sports and Youth Affairs minister; Kanchana Wijesekera, Power and Energy minister; Asanka Shehan Semasinghe, Trade and Samurdhi Development minister; Janaka Wakkumbura, Agriculture and Irrigation minister; Vidura Wickremanayake, Labour minister; Mohan Priyadarshana De Silva, Water Supply minister; Ramesh Pathirana, Education and Plantation Industries minister; Wimalaweera Dissanayake, Wildlife and Forest Resources Conservation minister; and Ahamed Nazeer Zainulabdeen, Environment minister. Women were entirely excluded from the new cabinet: all 17 ministers were men.", "doc_id": "8e77d6ee-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Behind_These_Hazel_Eyes", "document": "\"Behind These Hazel Eyes\" is a song by American singer Kelly Clarkson for her second studio album, Breakaway (2004). It was written by Clarkson with the song's producers Max Martin and Dr. Luke. The song was released on April 12, 2005, as the second single from the album. Clarkson considered \"Behind These Hazel Eyes\" one of her favorite songs and she once intended to name Breakaway after the song. \"Behind These Hazel Eyes\" is an uptempo song that incorporates crunchy guitars pulsing with driving beats and anthemic choruses; it narrates Clarkson's broken relationship with her ex-boyfriend.\n\n\"Behind These Hazel Eyes\" peaked at number six on the Billboard Hot 100 and stayed inside the top 10 of the chart for 15 weeks, a record for the longest time spent in the top 10 for a song that did not hit the top five, until it was beaten by Rihanna's \"Needed Me\" in 2016. It also became Clarkson's first song to top the Adult Pop Songs chart. It was certified platinum by the Recording Industry Association of America (RIAA) for selling over one million digital downloads. Elsewhere, the song charted in the top 10 in Australia, Austria, Ireland, the Netherlands, New Zealand and the United Kingdom.\n\nThe song's accompanying music video was directed by Joseph Kahn and produced by Danyi Deats-Barrett.
The concept of the video was conceived by Clarkson and depicts her as a bride who experiences some dream-like hints that her fianc\u00e9 is having an affair with a brunette ceremony attendee. The music video premiered online at MTV and it also received heavy rotation on Total Request Live. The song was performed live by Clarkson at numerous venues, including the Breakaway World Tour (2005) and the All I Ever Wanted Tour (2009).\n\n\"Behind These Hazel Eyes\" is a power ballad that was written by Clarkson, Max Martin, and Dr. Luke and produced by the latter two. According to the sheet music published at Musicnotes.com by Alfred Publishing, it is set in common time and has a moderate tempo of 90 beats per minute. It is composed in the key of F sharp minor, with Clarkson's vocal range spanning two octaves, from F#3 to F#5. The bridge was the only part of the song that was written by Dr. Luke and Martin together with Clarkson face to face. The song begins with Clarkson wailing \"oh oh oh\" over restless percussion.[10] In the first verse, the music becomes quiet to focus on Clarkson's vocal as she sings \"Seems like just yesterday/You were a part of me/I used to stand so tall/I used to be so strong/Your arms around me tight/Everything it felt so right/Unbreakable like nothing could go wrong.\" During the chorus, the sound of electric guitar is dominant as she vocalizes \"Here I am/Once again/I\u2019m torn into pieces/Can\u2019t deny it/Can\u2019t pretend/Just thought you were the one/Broken up deep inside/But you won\u2019t get to see the tears I cry/Behind these hazel eyes.\" Gil Kaufman of MTV noted that the song \"soared on crunchy guitars, driving beats and anthemic, agitated choruses.\"\n\nLyrically, the song narrates the story of a failed relationship that started well. Clarkson regrets having allowed herself to be vulnerable to her ex-boyfriend and she is determined that despite the pain that she feels, he will not get the satisfaction of seeing her cry. Michael Paoletta of Billboard praised Clarkson's vocal, writing \"Clarkson simply delivers a loose, tour-de-force vocal that simmers alongside a steroid-charged musical backdrop that is fun, fast and furious.\" Scott Juba of The Trades praised the production of the song, writing \"The song\u2019s strong hook pulls listeners in and involves them in the lyrics without ever becoming gimmicky or manipulative.\" He also complimented Clarkson's vocal, which \"oscillates between pain and defiance with near pinpoint accuracy.\"\n\nElizabeth Scott of Sky Living wrote, \"while Clarkson is doing well musically, her love life still hasn't picked up and she is heartbroken once again.
I'm sure the thought of another top ten hit might cheer her up!\" Scott Juba of The Trades considered \"Behind These Hazel Eyes\" the highlight of the album, writing \"Now that Clarkson is a few years older than she was when she recorded her first album, she brings more authenticity to relationship songs.\" Evan Sawdey of PopMatters compared \"Don't Let Me Stop You\" (2009) with \"Behind These Hazel Eyes\", saying that the former \"may sound like another rewrite of an older Clarkson hit (in this case, \"Behind These Hazel Eyes\"), but the observational lyrics about a questionable relationship are what ultimately makes the whole thing click.\" Charles Merwin of Stylus Magazine felt that the song should sell more records because \"the entire musical backing drops out to let Clarkson\u2019s voice through to live or die on its own.\" Pam Avoledo of Blogcritics believed that \"Behind These Hazel Eyes\" was superior in its writing to \"Since U Been Gone\", commenting that \"It's punchier, well-written and gives Clarkson a chance to show off her vocal skills without the trendy haughtiness.\" Joe Cross of Cox Communications thought that \"Behind These Hazel Eyes\" was a decent follow-up to \"Since U Been Gone\", saying \"It's no \"Since U Been Gone\" which is just a pop-rock juggernaut, but as follow-ups go, it's not too shabby. Clarkson's down-home everything (well, mostly her looks) sells these little heartbreak haikus exceptionally well.\" He also listed \"Behind These Hazel Eyes\" as one of the 40 songs that defined the summer of 2005. The same sentiment was echoed by Robert Copsey of Digital Spy, who considered the song Clarkson's second-best single after \"Since U Been Gone\", writing \"It proved a slow burner at the time of release, but this track's greatness continues to be realised over time.\"\n\n\"Behind These Hazel Eyes\" was listed at number five on Billboard magazine's list of Songs of the Summer of 2005. In 2015, the same publication ranked the song at number four on its list of Top 100 'American Idol' Hits of All Time. It also appeared at number three on the list of Kelly Clarkson's Top 15 Biggest Billboard Hot 100 Hits. Chris Kal of WKNS ranked \"Behind These Hazel Eyes\" at number four in his list of \"Top 10 Summer Songs From 2005\". Sam Lamsky of PopCrush described the song as \"a surefire fan favorite\" and ranked it at number nine in his list of \"Top 10 Kelly Clarkson songs\". Bill Lamb of About.com put the song at number 62 on his list of \"Top 100 Pop Songs of 2005\". The song was nominated in the category for Song of the Year: Mainstream Hit Radio at the 2005 Radio Music Awards. At the 24th ASCAP Pop Music Awards, the song was honored with the Most Performed Songs award. In January 2010, \"Behind These Hazel Eyes\" was the fifth most played song of the previous decade by American Idol performers. According to Nielsen Broadcast Data Systems, the song had been played 513,149 times through the week ending March 24, 2010.", "doc_id": "8e77d8e2-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Richard_Ayoade", "document": "Richard Ellef Ayoade (born 23 May 1977) is a British actor, comedian, broadcaster and filmmaker. He is best known for his role as socially awkward IT technician Maurice Moss in the Channel 4 sitcom The IT Crowd (2006\u20132013), for which he won the 2014 BAFTA for Best Male Comedy Performance.\n\nFrom 1998 to 1999, Ayoade was the president of the Footlights club whilst a student at the University of Cambridge.
He and Matthew Holness debuted their respective characters Dean Learner and Garth Marenghi at the Edinburgh Festival Fringe in 2000, bringing the characters to television with Garth Marenghi's Darkplace (2004) and Man to Man with Dean Learner (2006). He appeared in the comedy shows The Mighty Boosh (2004\u20132007) and Nathan Barley (2005), before becoming widely known for his role in The IT Crowd. After directing music videos for Kasabian, Arctic Monkeys, Vampire Weekend, and Yeah Yeah Yeahs, he wrote and directed the comedy-drama film Submarine (2010), an adaptation of the 2008 novel by Joe Dunthorne. He co-starred in the American science fiction comedy film The Watch (2012); his second film as a writer and director, the black comedy The Double (2013), drew inspiration from Fyodor Dostoevsky's novella of the same title.\n\nAyoade has frequently appeared on panel shows, most prominently on The Big Fat Quiz of the Year, and served as a team captain on Was It Something I Said? (2013). He presented the factual shows Gadget Man (2013\u20132015), its spin-off Travel Man (2015\u20132019), and the revival of The Crystal Maze (2017). He has also voiced characters in a number of animated projects, including the films The Boxtrolls (2014), Early Man (2018), The Lego Movie 2: The Second Part (2019), Soul (2020), and The Bad Guys (2022), as well as the series Strange Hill High (2013\u20132014), Apple & Onion (2018\u20132021), and Disenchantment (2021).\n\nAyoade has written three comedic film-focused books: Ayoade on Ayoade: A Cinematic Odyssey (2014), The Grip of Film (2017), and Ayoade on Top (2019). He is currently writing two children's books: The Book That No One Wanted to Read (2022)[4] and a picture book called The Fairy Tale Fan Club (TBD).\n\nIn February 2006, Ayoade began playing technically brilliant but socially awkward IT technician Maurice Moss in the sitcom The IT Crowd on Channel 4, appearing with Chris O'Dowd, Katherine Parkinson, Chris Morris, and later on, Matt Berry. The series' creator Graham Linehan wrote the part specifically for Ayoade. In 2008, Ayoade won the award for outstanding actor in a television comedy series at the Monte-Carlo Television Festival for his performance. In 2009, Ayoade co-starred with Joel McHale in the pilot for an American version of The IT Crowd, reprising his role with the same appearance and personality; however, no series was commissioned, and the pilot never aired. The original The IT Crowd ran for four seasons until 2010, with a special airing in 2013, for which Ayoade won a BAFTA for Best Male Comedy Performance.\n\nIn 2007, he directed the music videos for the songs \"Fluorescent Adolescent\" by Arctic Monkeys and Super Furry Animals' \"Run-Away\", which starred Matt Berry. The former received a UK Music Video Award nomination, attributed by Ayoade only to the song being \"so good\". Ayoade has frequently appeared as a panellist on The Big Fat Quiz of the Year, often with Noel Fielding, making his first appearance on The Big Fat Anniversary Quiz in 2007, which marked Channel 4's 25th anniversary.\n\nIn 2008, Ayoade directed the music videos for two Vampire Weekend singles: \"Oxford Comma\", filmed in one long take,[10] and \"Cape Cod Kwassa Kwassa\". That year he also directed videos for The Last Shadow Puppets songs \"Standing Next to Me\" and \"My Mistakes Were Made for You\", the latter of which was inspired by Federico Fellini's Toby Dammit.
He directed a live Arctic Monkeys DVD, At the Apollo (2008), recorded at the Manchester Apollo on super 16mm film. It was previewed at Vue cinemas across the UK in October 2008 and released on DVD the next month. Ayoade was featured in Paul King's 2009 film Bunny and the Bull, playing an extremely boring museum tour guide. That year he also directed two music videos for the Arctic Monkeys, \"Crying Lightning\" and \"Cornerstone\", and videos for Kasabian's \"Vlad the Impaler\", starring Fielding, and \"Heads Will Roll\" by the Yeah Yeah Yeahs.\n\nIn 2010, Ayoade made his feature directorial debut with Submarine, a coming-of-age comedy-drama he adapted from Joe Dunthorne's 2008 novel of the same name. The film stars newcomers Craig Roberts and Yasmin Paige with Sally Hawkins, Noah Taylor, and Paddy Considine. It follows Welsh teenager Oliver Tate (Roberts) as he becomes infatuated with a classmate (Paige) amid the turmoil of his parents' failing relationship. Produced by Warp Films and Film4, it premiered at the 35th Toronto International Film Festival in September 2010, had a general release in the UK in March 2011, and was released in June in the US after being picked up by the Weinstein Company for North America. Arctic Monkeys and The Last Shadow Puppets frontman Alex Turner contributed five original songs to the soundtrack, inspired by Simon & Garfunkel's music in The Graduate (1967). The film was positively received by critics, with The Guardian critic Peter Bradshaw calling Ayoade a \"tremendous new voice in British film\". Ayoade was nominated for a BAFTA for Outstanding Debut by a British Writer, Director or Producer at the 65th British Academy Film Awards.", "doc_id": "8e77da36-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/COVID-19_vaccination_in_South_Korea", "document": "COVID-19 vaccination in South Korea is an ongoing immunization campaign against severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), the virus that causes coronavirus disease 2019 (COVID-19), in response to the ongoing pandemic in the country.\n\nAs of 5 July 2021, vaccination had slowed since 20 June due to vaccine shortages, with the vaccination rate remaining at 29% for more than two weeks. According to JoongAng Ilbo, as of 5 July, the remaining stock of COVID-19 vaccine was 1.8 million doses, including 1.4 million from Pfizer.\n\nOn 6 July 2021, it was reported that South Korea had signed a deal with Israel to borrow 700,000 expiring doses of the Pfizer-BioNTech vaccine. South Korea will return the same amount of vaccine to Israel around September or October of the same year.\n\nOn 29 November 2021, President Moon Jae-in urged the rapid administration of booster shots against COVID-19, in response to an increased number of severe cases and deaths following the easing of anti-virus rules.\n\nOn 10 February 2021, South Korea granted its first approval of a COVID-19 vaccine to Oxford\u2013AstraZeneca, allowing the two-shot regimen to be administered to all adults, including the elderly. The approval came with a warning, however, that consideration is needed when administering the vaccine to individuals over 65 years of age due to limited data from that demographic in clinical trials.\n\nOn 14 April 2021, an additional 250,000 doses of the Pfizer/BioNTech vaccine arrived in the country.\n\nOn 3 June 2021, the United States donated one million doses of Johnson & Johnson's vaccine to South Korea.
The United States initially announced it would donate 550,000 doses for South Korean troops working in close contact with American forces.\n\nOn 19 August 2021, Romania decided to donate 450,000 expiring doses of the Moderna vaccine to South Korea.\n\nAstraZeneca signed a deal with South Korea's SK Bioscience to manufacture its vaccine products. The collaboration calls for SK Bioscience to manufacture AZD1222 for local and global markets. The World Health Organization approved AstraZeneca's COVID-19 vaccine for emergency use in February 2021. The initial approval covers doses produced by AstraZeneca and South Korea's SK Bioscience.\n\nSouth Korea's Korus Pharm formed a consortium to produce Russia's Sputnik V COVID-19 vaccine. The consortium will produce 500 million doses of the vaccine. However, the Sputnik V doses manufactured in South Korea are not for domestic use; the vaccine is to be exported to Russia and the UAE.\n\nNovavax will license out its NVX-CoV2373 vaccine technology to SK Bioscience for contract manufacturing purposes. SK Bioscience will manufacture 40 million doses of Novavax vaccines.\n\nOn 8 September 2022, SK Bioscience submitted an investigational new drug application for its GBP510 COVID-19 vaccine candidate to the Korean Ministry of Food and Drug Safety for a Phase III clinical trial. SK Bioscience plans its Phase III trial as a comparative effectiveness clinical trial targeting 4,000 people in South Korea.", "doc_id": "8e77daf4-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/%C4%90%E1%BA%A1i_Vi%E1%BB%87t%E2%80%93Lan_Xang_War_(1479%E2%80%931484)", "document": "The \u0110\u1ea1i Vi\u1ec7t\u2013Lan Xang War of 1479\u201384, also known as the White Elephant War, was a military conflict precipitated by the invasion of the Lao kingdom of Lan Xang by the Vietnamese \u0110\u1ea1i Vi\u1ec7t Empire. The Vietnamese invasion was a continuation of L\u00ea Th\u00e1nh T\u00f4ng's expansion, by which \u0110\u1ea1i Vi\u1ec7t had conquered the kingdom of Champa in 1471. The conflict grew into a wider conflagration involving the Ai-Lao people from Sip Song Chau Tai along with the Mekong river valley Tai peoples of the Yuan kingdom of Lan Na and the L\u00fc kingdom of Sip Song Pan Na (Sipsong Panna), to the Muang along the upper Irrawaddy river. The conflict ultimately lasted approximately five years, growing to threaten the southern border of Yunnan and raising the concern of Ming China. Early gunpowder weapons played a major role in the conflict, enabling \u0110\u1ea1i Vi\u1ec7t's aggression. Early success in the war allowed \u0110\u1ea1i Vi\u1ec7t to capture the Lao capital of Luang Prabang and destroy the Muang Phuan city of Xiang Khouang. The war ended as a strategic victory for Lan Xang, as they were able to force the Vietnamese to withdraw with the assistance of Lan Na and Ming China. Ultimately the war contributed to closer political and economic ties between Lan Na, Lan Xang, and Ming China. In particular, Lan Na's political and economic expansion led to a \"golden age\" for that kingdom.\n\nFor centuries before the L\u00ea dynasty, the Vietnamese and Lao polities existed side by side and frequently interacted. The Vietnamese chronicles record growing clashes between various Tai polities and the Viet court in the 1320s and 1330s, specifically the Ng\u01b0u H\u1ed1ng of Sip Song Chau Tai and the Ailao of Houaphanh and Vientiane.
A Vietnamese inscription in Laos, dated 1336 and discovered in the 1960s by Emile Gaspardone, concerns the defeat, in the previous year, of a Vietnamese army led by Emperor Tr\u1ea7n Minh T\u00f4ng in a battle against the Ailao chief Souvanna Khamphong, the grandfather of Fa Ngum. In the 15th century, the Tai-speaking people around \u0110\u1ea1i Vi\u1ec7t were close in number to those speaking Viet. The Ming census of 1417 showed that there were 162,559 households, while Muang Phuan had 90,000 households, according to the Vietnamese chronicle. Adding the population of Lan Xang, a larger polity of the same period, would have made the Viet-speaking people a minority in the region. In fact, contemporary records from Lao, Vietnamese and Chinese sources suggest that the central Lao and central Vietnam area during the 14th and 15th century would have been relatively densely populated, more so than the coastal areas of the time.\n\nDuring the Ming occupation of Vietnam (1406-1427), the Chinese subdued some principalities around the established \u0110\u1ea1i Vi\u1ec7t territory. Early L\u00ea dynasty expeditions to the northwest border of \u0110\u1ea1i Vi\u1ec7t further sought to extend control of the area. L\u00ea L\u1ee3i led two \u201cpunitive expeditions\u201d (chinh) in the Black river area in 1423 and 1433. His successors led similar expeditions in 1434, 1437, 1439, 1440 and 1441, and another two in 1440 and 1448 against the tribes of the Tuyen Quang area. The Vietnamese-Yunnan border was clearly the main focus of the L\u00ea dynasty's strategic and territorial efforts in the region. The most likely intention was to subdue local Tai-speaking groups and safeguard the transport of copper for the purpose of making firearms. By the end of the 1440s the northeast and northwest borders of \u0110\u1ea1i Vi\u1ec7t were basically settled and under firm Vietnamese control. By 1475, Yunnan had become a preferred tribute route to China.\n\nThe terrain of the territory in which the conflict took place was mountainous, ranging from the Annamese Cordillera to the western frontier of \u0110\u1ea1i Vi\u1ec7t. The western areas were characterized by river valleys controlled by diverse ethnic groups. First was the Black river, running parallel to the Red river on its south-west, and Sipsong Chu Tai. To the south were the valleys of the Hua Phan and the Ai-Lao, reaching into the upper valleys of streams that ran east through the Vietnamese lowlands to the sea. Further south were other valleys of the Cam peoples, and the Phuan (Bon-man) of Xiang Khouang. West of these highland valleys were more valleys that reached towards the great valley of the Mekong river, where Lan Xang (Lao-qua) was located with its capital in Luang Prabang.\n\nVietnamese expeditions in the 1430s and 1440s were characteristically attempts to hold down active groups of Tai in the scattered valleys west of Vietnamese territory. By the 1460s, the L\u00ea dynasty, in connection with nearby Tai chieftains, had been able to establish a series of stable positions from north to south, from the Black river down to Xieng Khouang along the western frontier of \u0110\u1ea1i Vi\u1ec7t. By the time L\u00ea Th\u00e1nh T\u00f4ng invaded, there would have been a vague sense of a maze of mountain valleys, with the major threat of Lan Xang beyond them. Vietnamese maps were of little help as they did not extend far into the mountains.
Tactically, \u0110\u1ea1i Vi\u1ec7t had veteran generals from fringe areas of the Tai world who had fought in various nearby valleys over decades. Their knowledge of the nearby terrain, as well as of the general ecological pattern, would have been of significant use in battlefield decisions throughout Tai territory.\n\nThe Xiang Khouang plateau is a western extension of the Annamese Cordillera, drained principally by the Ngum and Ngiap rivers to the south and the Khan river to the north, all of which are Mekong river tributaries. The area is also referred to as \u201cMuang Phuan\u201d or \u201ccountry of the Phuan\u201d since the majority population of the area is Tai Phuan, a subgroup of the Lao Loum. The principal city of the region was Xiang Khouang, which together with Luang Prabang (Xiang Dong Xiang Thong or Muang Sua), Vientiane (Viang Chan Viang Kham), and Sikhottabong constituted the major power centers of Lan Xang. Throughout its history, the region has been of significant military and commercial importance. In the 15th century, the Phuan region most likely served as one of the main sources of cattle for Vietnamese peasants on the coast. The capital, Xiang Khouang, and the surrounding plain were well suited for rice cultivation, with excellent forage for cattle and dependable water supplies from mountain streams.", "doc_id": "8e77dca2-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Joseph_Peter_Wilson", "document": "Joseph Peter \"Joe Pete\" Wilson (May 22, 1935 \u2013 September 13, 2019) was an American Olympic cross-country skier, who skied for the U.S. in cross-country at the 1960 Winter Olympics and later became a well-known skiing administrator in the United States. Wilson also co-authored several books on cross-country skiing, all written with William J. Lederer. Wilson set up the cross-country ski area at the Trapp Family Lodge in Stowe, Vermont \u2013 the lodge established by the Trapp family of The Sound of Music fame. In 1973, Wilson organized a meeting of 25 ski areas and established the National Ski Touring Operators' Association. Wilson served as its first president, from 1973 to 1977. After several name changes it is now called the Cross Country Ski Areas Association (CCSAA). CCSAA is an international association of U.S. and Canadian cross-country ski areas. Wilson is also known for having set up an inn in Keene, New York, the Bark Eater Inn, and developing the ski trails around the inn.\n\nBorn in Lake Placid, New York to Gordon H. Wilson and Anna L. Wilson, Joe Pete spent his summers on his family farm in Keene, New York. In 1953 he graduated from Lake Placid High School where he was a Ski Meister Skier for four years. In 1954 he attended Vermont Academy under Warren Chivers. In 1958 he graduated from St. Lawrence University where he competed in cross country, Nordic combined, and Ski Meister under Otto Scheibs. He was used for team Alpine scoring only when necessary. He was elected captain of the team for two years. As skiing started to grow in popularity in the U.S. in the 1940s and 1950s, colleges began including ski racing in their athletic programs. Since the sport was so new, college coaches had to use the four best skiers they had in order to qualify as a team. Each of the four did the best he could in his specialty of either cross country, jumping, downhill or slalom. A four-event skier was the rare athlete that could place high in all four disciplines. Thus was born the Ski Meister Skier.\n\nAfter leaving the U.S.
Team, Wilson returned to Lake Placid and the family business. He volunteered as a coach for high school kids throughout the Adirondacks. He was eager to get coaches and skiers tuned into a great way of life.\n\nHe was also analyzing potential locations in the Lake Placid area to establish a U.S. equivalent of Oslo's Holmenkollen, but the time was not right. He predicted at the time it would be at least ten years before there would be enough interest in the U.S. to support such an idea! Consequently, he was more than mildly surprised when he was contacted by the head of the New York State Forest Rangers, Mr. William Petty, to research the Mount Van Hoevenberg area, with the idea in mind of creating cross-country trails in a park-type atmosphere. Since the bobsled run was already there, the state had substantial land holdings in the area.\n\nWilson had developed a reputation in his late teens for his knowledge of the woods, his logging abilities, road building capabilities, and knowledge of heavy equipment. He eagerly took to the job. He spent two months tramping, judging, and recording his notes. His only concern at the time was whether there would be adequate elevation change to comply with international rules. Subsequent land purchases solved that problem. As a result of the efforts required to hold the 1980 Olympics, it became, next to Holmenkollen, the premier cross-country ski center in the world.\n\nFrom 1959 to 1963, Wilson was a lieutenant in the United States Army. He was assigned to the U.S. Biathlon Team Training at Fort Richardson, in Anchorage, Alaska. He spent his entire four-year service career competing in cross-country and in biathlon for the U.S. in Europe and with the U.S. Army Marksmanship Team.\n\nUpon arrival at Fort Richardson, Wilson was shocked to discover there were absolutely no training facilities available for biathlon: no trails and, most importantly, no shooting range. There were few people who knew what biathlon entailed. He convinced the U.S. Army Corps of Engineers to deliver a new Caterpillar D-8. Wilson and his two teammates, Dick Taylor and Peter Lahdenpera, former college racing competitors, and the only three skiers representing the entire Nordic/biathlon team at the time, built a complex biathlon shooting range which was used for the next 12 years, until the U.S. Army stopped financing the U.S. Biathlon effort.\n\nIn 1959 Wilson placed first in the Olympic pre-trials in the 15k, also known as the North American Nordic championships, at Squaw Valley.[3] This placing set Joe Pete up as a major U.S. skier due to the number of U.S. and foreign competitors in the race. International Olympic Committee rules require that a major international competition be held in all events prior to an Olympic competition, usually scheduled one year prior as a trial run to test the complex systems involved.\n\nNordic skiing is the poor cousin of Alpine (downhill) skiing, which is so popular in the U.S.\n\nThe individual disciplines involved in the Nordics are cross-country ski racing, ski jumping, combined cross-country and jumping, and biathlon: cross-country skiing combined with rifle marksmanship.\n\nWilson was a member of the U.S. Nordic Ski Team competing in Squaw Valley, California in 1960. He skied the 30K, finishing 43rd, the reverse of the number on his racing bib. He later commented he somehow ended up with the wrong bib! Based on his results in the pre-Olympics, on the same courses in 1959, he should have placed much higher.\n\nIn 1961 and 1962, Wilson was on the U.S.
Team racing in Europe, including Scandinavia, in cross-country and in biathlon. In 1962 he finished tenth in the Swedish national championships in Falun, Sweden, among 900 competitors, a significant placing for an American at that time and still impressive today. This would be the equivalent of what is referred to today as a World Cup. His two teammates also had impressive performances. These placings remain the highest ever posted by a U.S. skier to this day.", "doc_id": "8e77ddd8-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Emirate_of_Abu_Dhabi", "document": "The Emirate of Abu Dhabi is one of seven emirates that constitute the United Arab Emirates (UAE). At 67,340 km2 (26,000 sq mi), it is by far the largest emirate, larger than the rest of the federation combined and accounting for approximately 87 percent of the UAE's total land area. Abu Dhabi also has the second-largest population of the seven emirates. In June 2011 this was estimated to be 2,120,700 people, of which 439,100 people (less than 21%) were Emirati citizens. The city of Abu Dhabi, after which the emirate is named, is the capital of both the emirate and federation.\n\nIn the early 1970s, two important developments influenced the status of the Emirate of Abu Dhabi. The first was the establishment of the United Arab Emirates in December 1971, with Abu Dhabi as its political and administrative capital. The second was the sharp increase in oil prices following the October 1973 War, which accompanied a change in the relationship between the oil countries and foreign oil companies, leading to a dramatic rise in oil revenues. Abu Dhabi's Gross Domestic Product (GDP) was estimated at AED 960 billion (about EUR 0.24 trillion) at current prices in 2014. Mining and quarrying (including crude oil and natural gas) account for the largest contribution to GDP (58.5 per cent in 2011). Construction-related industries are the next largest contributor (10.1 per cent in 2011). GDP grew to AED 911.6 billion in 2012, or over US$100,000 per capita. In recent times, the Emirate of Abu Dhabi has continuously contributed around 60 per cent of the GDP of the United Arab Emirates, while its population constitutes only 34 per cent of the total UAE population according to the 2005 census.\n\nParts of Abu Dhabi were settled millennia ago, and its early history fits the nomadic herding and fishing pattern typical of the broader region. The Emirate shares the historical region of Al-Buraimi or Tawam (which includes modern-day Al Ain) with Oman, and is demonstrated to have been inhabited for over 7,000 years. Modern Abu Dhabi traces its origins to the rise of an important tribal confederation, the Bani Yas, in the late 18th century, which also assumed control of Dubai. In the 19th century, the Dubai and Abu Dhabi branches parted ways.\n\nInto the mid-20th century, the economy of Abu Dhabi continued to be sustained mainly by camel herding, production of dates and vegetables at the inland oases of Al-Ain and Liwa, and fishing and pearl diving off the coast of Abu Dhabi city, which was occupied mainly during the summer months. Most dwellings in Abu Dhabi city were, at this time, constructed of palm fronds (barasti), with the wealthier families occupying mud huts.
The growth of the cultured pearl industry in the first half of the twentieth century created hardship for residents of Abu Dhabi, as pearls represented the largest export and main source of cash earnings.\n\nIn 1939, Sheikh Shakhbut Bin-Sultan Al Nahyan granted petroleum concessions, and oil was first found in 1958. At first, oil money had a marginal impact. A few low-rise concrete buildings were erected, and the first paved road was completed in 1961, but Sheikh Shakhbut, uncertain whether the new oil royalties would last, took a cautious approach, preferring to save the revenue rather than investing it in development.\n\nHis brother, Sheikh Zayed bin Sultan Al Nahyan, saw that oil wealth had the potential to transform Abu Dhabi. The ruling Nahyan family decided that Sheikh Zayed should replace his brother as ruler and carry out his vision of developing the country. On August 6, 1966, with the assistance of the British, Zayed became the new ruler.\n\nWith the announcement by the UK in 1968 that it would withdraw from the area of the Persian Gulf by 1971, Sheikh Zayed became the main driving force behind the formation of the UAE. After the Emirates gained independence in 1971, oil wealth continued to flow to the area, and traditional mud-brick huts were rapidly replaced with banks, boutiques and modern highrises.\n\nThe United Arab Emirates is located in the oil-rich and strategic Arabian or Persian Gulf region. It adjoins the Kingdom of Saudi Arabia and the Sultanate of Oman.\n\nAbu Dhabi is located in the far west and southwest part of the United Arab Emirates along the southern coast of the Persian Gulf between latitudes 22\u00b040' and around 25\u00b0 north and longitudes 51\u00b0 and around 56\u00b0 east. It borders the emirates of Dubai and Sharjah to its north.\n\nThe total area of the Emirate is 67,340 square kilometres (26,000 square miles), occupying about 87% of the total area of the UAE, excluding islands. The territorial waters of the Emirate embrace about 200 islands off its 700 km (430 mi) coastline. The topography of the Emirate is dominated by low-lying sandy terrain dotted with sand dunes exceeding 300 m (980 ft) in height in some areas southwards. The eastern part of the Emirate borders the western fringes of the Hajar Mountains. Hafeet Mountain, Abu Dhabi's highest elevation and sole mountain, rising 1,100\u20131,400 m (3,600\u20134,600 ft), is located south of Al-Ain City.\n\nLand cultivation and irrigation for agriculture and forestation over the past decade have increased the size of \"green\" areas in the emirate to about 5% of the total land area, including parks and roadside plantations. About 1.2% of the total land area is used for agriculture. A small part of the land area is covered by mountains, containing several caves. The coastal area contains pockets of wetland and mangrove colonies. Abu Dhabi also has dozens of islands, mostly small and uninhabited, some of which have been designated as sanctuaries for wildlife.\n\nThe emirate is located in the tropical dry region. The Tropic of Cancer runs through the southern part of the Emirate, giving its climate an arid nature characterised by high temperatures throughout the year, and a very hot summer. The Emirate's high summer (June to August) temperatures are associated with high relative humidity, especially in coastal areas. Abu Dhabi has warm winters with occasional low temperatures.
The air temperatures show variations between the coastal strip, the desert interior and areas of higher elevation, which together make up the topography of the Emirate.\n\nAbu Dhabi receives scant rainfall, but totals vary greatly from year to year. Seasonal northerly winds blow across the country, helping to ameliorate the weather when they are not laden with dust; there are also brief moisture-laden south-easterly winds. The winds often vary between southerly, south-easterly, westerly, northerly and northwesterly. Another characteristic of the Emirate's weather is the high rate of evaporation of water due to several factors, namely high temperature, wind speed, and low rainfall.\n\nThe oasis city of Al Ain, about 150 km (93 mi) away, bordering Oman, regularly records the highest summer temperatures in the country; however, the dry desert air and cooler evenings make it a traditional retreat from the intense summer heat and year-round humidity of the capital city.", "doc_id": "8e77df22-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Stravaganza_(series)", "document": "Stravaganza is a series of novels written by children's author Mary Hoffman. The books are set alternately between Islington, an area of London, England, and various cities in Talia, an alternate version of Renaissance Italy.\n\nThe series originally consisted of a trilogy of books: City of Masks, City of Stars, and City of Flowers. The popularity of the trilogy allowed the series to be extended for three more books: City of Secrets, City of Ships, and City of Swords.\n\nMary Hoffman was originally inspired to write the Stravaganza series after a family trip to Venice and an incident involving a gondola ride. The subsequent books developed from the original idea. The country of Talia reflects Hoffman's own imagining of what Italy is like. Further inspiration for the settings of each of the books came from Hoffman's regular trips to Italy.\n\nThough the series was intended to be a trilogy, it was later expanded into six books. To continue the series, the fourth book, City of Secrets, drew on the theme of secrets and knowledge while picking up the open-ended plot at the end of City of Flowers, in which Luciano Crinamorte is due to attend university in the city of Padavia. Each book in the series introduces a new protagonist as a Stravagante, a traveler between England and the parallel world of Talia, while maintaining previously introduced characters as part of the supporting cast.\n\nThe Stravaganza series is primarily set in Talia, which is based on Italy during the Renaissance in the 16th century. Most notably, the primary antagonists in the series, the di Chimici family, were inspired by the de Medici family. In the series, it is established that a number of differences exist between Talia and Italy in the 16th century in historical, religious, and scientific respects.\n\nThe existence of Talia parallels the contemporary 21st-century world of England, which serves as a secondary setting and the origin of the protagonists of each book in the series. Individuals capable of moving between worlds are known as Stravaganti; a Stravagante's ability to move between worlds is facilitated by a talisman, an object that originally came from the world opposite the traveler's own.\n\nThe country of Talia comprises twelve city-states, each of which has its own equivalent in this world and an appellation referring to a unique quality of the city.
Half of these city-states are under the political control of members of the di Chimici family, whose strongholds lie in Giglia and in Remora, which is also home to a di Chimici Pope. The few cities which remain outside di Chimici control are either still negotiating political treaties with the di Chimici (Montemurato) or are fully independent (Bellezza, Classe, Romula, Padavia, and Cittanuova).", "doc_id": "8e77dfb8-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Pepsi", "document": "Pepsi is a carbonated soft drink manufactured by PepsiCo. Originally created and developed in 1893 by Caleb Bradham and introduced as Brad's Drink, it was renamed Pepsi-Cola in 1898, and then shortened to Pepsi in 1961. Pepsi was first introduced as \"Brad's Drink\"[1] in New Bern, North Carolina in 1893 by Caleb Bradham, who made it at his drugstore, where the drink was sold.\n\nIt was renamed Pepsi-Cola in 1898, \"Pepsi\" because it was advertised to relieve dyspepsia (indigestion) and \"Cola\" referring to the cola flavor. Some have also suggested that \"Pepsi\" may have been a reference to the drink aiding digestion like the digestive enzyme pepsin, but pepsin itself was never used as an ingredient of Pepsi-Cola.\n\nThe original recipe also included sugar and vanilla. Bradham sought to create a fountain drink that was appealing and would aid in digestion and boost energy.\n\n[Image caption: The original stylized Pepsi-Cola wordmark, used from 1898 until 1905.]\n\nIn 1903, Bradham moved the bottling of Pepsi from his drugstore to a rented warehouse. That year, Bradham sold 7,968 gallons of syrup. The next year, Pepsi was sold in six-ounce bottles, and sales increased to 19,848 gallons. In 1909, automobile race pioneer Barney Oldfield was the first celebrity to endorse Pepsi, describing it as \"A bully drink...refreshing, invigorating, a fine bracer before a race.\" The advertising theme \"Delicious and Healthful\" was then used over the next two decades.\n\nIn 1923, the Pepsi-Cola Company entered bankruptcy\u2014in large part due to financial losses incurred by speculating on the wildly fluctuating sugar prices as a result of World War I. Assets were sold and Roy C. Megargel bought the Pepsi trademark. Megargel was unsuccessful in efforts to find funding to revive the brand, and soon Pepsi-Cola's assets were purchased by Charles Guth, the president of Loft, Inc. Loft was a candy manufacturer with retail stores that contained soda fountains. He sought to replace Coca-Cola at his stores' fountains after The Coca-Cola Company refused to give him additional discounts on syrup. Guth then had Loft's chemists reformulate the Pepsi-Cola syrup formula.\n\nOn three occasions between 1922 and 1933, the Coca-Cola Company was offered the opportunity to purchase the Pepsi-Cola Company, which it declined on each occasion.\n\nDuring the Great Depression, Pepsi gained popularity following the introduction in 1934 of a 12-ounce bottle. Prior to that, Pepsi and Coca-Cola sold their drinks in 6.5-ounce servings for about $0.05 a bottle. With a radio advertising campaign featuring the popular jingle \"Nickel, Nickel\" \u2013 first recorded by the Tune Twisters in 1940 \u2013 Pepsi encouraged price-conscious consumers to double the volume their nickels could purchase.
The jingle is arranged in a way that loops, creating a never-ending tune: \"Pepsi-Cola hits the spot / Twelve full ounces, that's a lot / Twice as much for a nickel, too / Pepsi-Cola is the drink for you.\" Coming at a time of economic crisis, the campaign succeeded in boosting Pepsi's status. From 1936 to 1938, Pepsi-Cola's profits doubled.\n\n[Image caption: The stylized Pepsi-Cola wordmark used from 1940 to 1950; it was reintroduced in 2014.]\n\nPepsi's success under Guth came while the Loft Candy business was faltering. Since he had initially used Loft's finances and facilities to establish the new Pepsi success, the near-bankrupt Loft Company sued Guth for possession of the Pepsi-Cola company. A long legal battle, Guth v. Loft, then ensued, with the case reaching the Delaware Supreme Court and ultimately ending in a loss for Guth.\n\nPepsi has official sponsorship deals with the National Football League, National Hockey League, and National Basketball Association. In 2007, and from 2013 to 2022, Pepsi sponsored the NFL's Super Bowl halftime shows. It was the sponsor of Major League Soccer until December 2015 and Major League Baseball until April 2017, both leagues signing deals with Coca-Cola. From 1999 to 2020, Pepsi also held the naming rights to the Pepsi Center, an indoor sports and entertainment facility in Denver, Colorado; the venue's new naming rights were announced on October 22, 2020. In 1997, after his sponsorship with Coca-Cola ended, NASCAR Sprint Cup Series driver (and later Fox NASCAR announcer) Jeff Gordon signed a long-term contract with Pepsi; he drove with Pepsi logos on his car in various paint schemes for about two races each year, usually a darker paint scheme during nighttime races. Pepsi has remained one of his sponsors ever since. Pepsi has also sponsored the NFL Rookie of the Year award since 2002.\n\nPepsi signed its first global sponsorship deals with the UEFA Champions League and the UEFA Women's Champions League starting in the 2015\u201316 season, along with its sister brand Pepsi Max, becoming a global sponsor of the competitions.\n\nPepsi also has sponsorship deals with international cricket teams. The Pakistani national cricket team is one of the teams that the brand sponsors; the team wears the Pepsi logo on the front of its Test and ODI match clothing.", "doc_id": "8e77e0da-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Nankai_University", "document": "Nankai University is a national public research university located in Tianjin, China. It is a prestigious Chinese state Class A Double First Class University approved by the central government of China, and a member of the former Project 985 and Project 211 groups of universities. It was founded in 1919 by educators Yan Xiu and Zhang Boling.\n\nDuring the Sino-Japanese War (1937\u20131945), Nankai University, Peking University and Tsinghua University merged and formed the National Changsha Provisional University, which later moved to Kunming and was renamed the National Southwestern Associated University. On 25 December 2000, the State Ministry of Education signed an agreement with Tianjin Municipal Government to jointly establish and develop Nankai University.
Since then, Nankai has been listed among the universities to receive priority development investments from the Chinese government in the twenty-first century.\n\nNankai has long been recognized as one of the most prestigious universities in China, consistently appearing on various top-10 lists of Chinese universities. As a comprehensive university with a wide range of disciplines, Nankai maintains a balance between the humanities and the sciences, emphasizing solid foundations and combining application with creativity. The university has 26 academic colleges, together with the Graduate School, the School for Continuing Education, the Advanced Vocational School and the Modern Distance Education School, with disciplines covering literature, history, philosophy, economics, management, law, science, engineering, agriculture, medicine, teaching and art. The university is especially well known for its economics, history, chemistry, and mathematics research and study.\n\nThe university has academic programs that cover the humanities, natural sciences, technology, life sciences, medical sciences and the arts, with an equal balance between the sciences and the liberal arts.\n\nNankai's academic programs operate on a semester calendar with two terms: the first begins in early September and ends in early January, and the second begins in early February and ends in early July. Accordingly, winter break usually runs from early January to early February and summer break from early July to early September.\n\nNankai, in 2013, had 22 colleges and schools, and offered 79 bachelor's degree programs, 231 master's degree programs, and 172 PhD programs. The total enrollment stood at approximately 12,000 undergraduate students and 11,000 graduate students. Of the total student population, 10% were international students from different countries around the world.\n\nIn 2018 the university offered a total of 80 undergraduate programs, 231 master's programs, 172 PhD programs and 28 postdoctoral research stations. The total count of full-time students was 24,525. Academic staff consisted of 1,986 full-time teaching personnel, with 214 professors and 805 associate professors.\n\nNankai offers different scholarships, among them the Chinese Government Scholarship for international students, the CSC Scholarship for American Students, the Tianjin Government Scholarship, the Confucius Institute Scholarship[44] and the Nankai University Scholarship.\n\nNankai also offers several scholarship programs to support international exchanges and hosts different international student exchange programs. Nankai's broad international programs are organized through the International Office. The university has established cooperative relationships with more than 300 international universities and academic institutions, including programs like an Elementary School Chinese Program with schools in the US, which was launched in 2009. In 2010 the US-China exchange program Study International was launched, with the plan to send 100,000 American students to China within a four-year time frame. The first students were sent to Nankai University in the fall of 2010.\n\nIn 2012 Nankai was invited to become the 35th member of The Global University Leaders Forum (GULF), a global community of high-ranking universities, including renowned members like Yale University, Harvard University and the University of Oxford.\n\nThe university has established broad international exchanges and collaborative relationships with more than 200 universities and academic institutions.
Nobel laureates Chen Ning Yang, Tsung-Dao Lee, Samuel Chao Chung Ting, Robert A. Mundell, and Reinhard Selten, as well as former President of South Korea Kim Dae-jung and former US Secretary of State Henry Kissinger, have all been awarded honorary professorships by Nankai University.\n\nIn November 2016, Robert F. Engle, who won the 2003 Nobel Prize in economics, became an honorary professor at Nankai.\n\nMany other world-renowned scholars and entrepreneurs have been invited as visiting professors at Nankai University. Dr. Heng-Kang Sang returned from the United States to found the College of Economic and Social Development in 1987. Nankai University, a magnet for talented mathematicians from home and abroad, has become a well-known center for mathematics.\n\nNankai established nine Confucius Institutes around the world. Scientists from Nankai have been involved in a number of scientific breakthroughs and important advances.\n\nNankai University has generally been ranked among the top 10 universities in China, with exceptions in 2017 and 2018. In the ranking of the top 50 universities in China published by Renmin University of China in June 2011, it was ranked 10th. In the Netbig ranking of 2011 it was ranked 10th as well. In the QS World University Rankings of 2013 it was ranked 62nd among Asian universities, and 11th in China. In the Chinese first-class university ranking of 2012 by Wu Shulian of China Management Academy, it was placed 8th. In the CWTS Leiden Ranking 2013, it was ranked 53rd among world universities, and 1st in China. In the Nature Index Global 2014, it was ranked 83rd among world universities, and 7th in China.", "doc_id": "8e77e1fc-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Mukbang", "document": "A mukbang or meokbang, also known as an eating show, is an online audiovisual broadcast in which a host consumes various quantities of food while interacting with the audience. It became popular in South Korea in 2010, and has since become a major vehicle of Hallyu, along with K-Beauty, K-pop, and Korean drama, earning its status as a global trend. Varieties of foods ranging from pizza to noodles are consumed in front of a camera. The purpose of mukbang is also sometimes educational, introducing viewers to regional specialities or gourmet spots.\n\nA mukbang is usually prerecorded or streamed live through a webcast on streaming platforms such as AfreecaTV, YouTube, TikTok, and Twitch. In the live version, the mukbang host chats with the audience while the audience types in real time in the live chat room, creating multimodal communication. Eating shows are expanding their influence on internet broadcasting platforms and serve as a virtual community and a venue for active communication among internet users.\n\nFamous mukbangers in Asia and North America have gained popularity on social media and turned mukbang into a high-income career. By cooking and consuming food on camera for a large audience, mukbangers generate income from advertising, sponsorships, and endorsements, as well as viewers' support. However, there has been growing criticism of mukbang's promotion of unhealthy eating habits, animal cruelty, and food waste.\n\nMukbang emerged from a solo-eating population in South Korea that found entertainment in watching actors and actresses eat in TV shows and movies.
Commentators have noted the contrast with the traditional eating culture, which revolves around sharing communal dishes at the family dinner table.\n\nKim-Hae Jin, a doctoral candidate from Chosun University, argued that one can vicariously satisfy the desire for food by watching others eat. In Korea, individuals who stream mukbang are called broadcast jockeys (BJs). According to Hanwool Choe, a postdoctoral fellow at the University of Hong Kong, the high level of BJ-to-viewer and viewer-to-viewer interaction contributes to the sociability aspect of producing and consuming mukbang content. Her study analyzed BJ Changhyun's interactions with his audience via live chat, including one instance where he temporarily paused to follow a fan's directions on what to eat next and how to eat it. Viewers may influence the direction of the stream, but the BJ retains control over what he or she eats. Ventriloquism, by which BJs mime the actions of their fans by directing food to the camera in a feeding motion and eating in their stead, is another technique that creates the illusion of a shared experience in one room.\n\nA study conducted by Seoul National University found that within a two-year time frame (April 2017 to April 2019), a search for the term \"mukbang\" returned over 100,000 videos on YouTube. It reported that alleviating the feelings of loneliness associated with eating alone may be the primary reason for mukbang's popularity. In a pilot study from February 2022 on mukbang-watching and mental health, psychologists laid the foundation for future investigation into the potential detriments of using mukbang, or virtual eating, as a substitute for social experiences. Another reason for mukbang viewing could be its potential sexual use. Researchers have argued that mukbangs can be viewed to satisfy fetishes regarding women eating, which may help explain why many mukbang hosts are conventionally attractive women. Other studies argue that individuals who watch mukbang do so for entertainment, as an escape from reality, or to get satisfaction from the ASMR aspects of mukbang, such as the eating sounds and sensations. Mukbang has also been described as a multi-sensorial experience and compared to a similar carnal video type, pornography. Researchers liken the reduced satisfaction of eating from fervid viewership of mukbang to the diminished satisfaction of sex from overconsumption of pornography.\n\nA popular sub-genre of the trend is the \"cook-bang\" show, in which the streamer includes the preparation and cooking of the featured dishes as part of the show. South Korean video game players have sometimes broadcast mukbang as breaks during their overall streams. The popularity of this practice among local users led the video game streaming service Twitch to begin trialing a dedicated \"social eating\" category in July 2016; a representative of the service stated that this category is not necessarily specific to mukbang, but would leave the concept open to interpretation by streamers within its guidelines.", "doc_id": "8e77e2ce-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Ukulele_Orchestra_of_Great_Britain", "document": "The Ukulele Orchestra of Great Britain (UOGB) is a British musical ensemble founded in 1985 by George Hinchliffe and Kitty Lux as a bit of fun. The orchestra features ukuleles of various sizes and registers from soprano to bass.
The UOGB is best known for performing musically faithful but often tongue-in-cheek covers of popular songs and musical pieces from a wide variety of music genres taken \"from the rich pageant of western music\". Songs are often reinterpreted, sometimes with a complete genre twist, or well-known songs from multiple genres are seamlessly woven together. Songs are introduced with light-hearted deadpan humour, and juxtaposition is a feature of their act: the members of the orchestra wear semi-formal (black tie) evening dress and sit behind music stands in a parody of a classical ensemble.\n\nThe UOGB has purposely remained an independent music group, unsigned to any record label. Along with Lux and Hinchliffe, David Suich and Richie Williams are original members; Hester Goodman, Will Grove-White, Jonty Bankes and Peter Brooke Turner joined in the early 1990s, Leisa Rea joined in 2003, Ben Rouse in 2014 and Laura Currie in 2021. Lux died in 2017, two years after retiring from the orchestra due to chronic ill health. Over the years the UOGB has released over 30 albums but has spent most of its time touring around the world.\n\nThe UOGB has consistently received critical praise from the media for its concerts. The Ukulele Orchestra of Great Britain has been called \"not only a national institution, but also a world-wide phenomenon\". The UOGB has also often been credited with being largely responsible for the current world-wide resurgence in popularity of the ukulele and ukulele groups.\n\nThe Ukulele Orchestra of Great Britain (UOGB) was formed in London in 1985 when the multi-instrumentalist and musicologist George Hinchliffe gave his friend the post-punk singer Kitty Lux a ukulele for her birthday, after she had expressed an interest in learning more about harmony. After first playing together, they purchased a few ukuleles for some of their friends, including David Suich and Richie Williams. Williams recalled that his first ukulele cost \"\u00a317 with wholesale discount\". Hinchliffe named the new musical group with a deliberate oxymoron, 'The Ukulele Orchestra of Great Britain', \"and suddenly we were the world's first ukulele orchestra.\" The ukulele was selected for its musical versatility rather than its novelty value. Hinchliffe told the Chicago Tribune that the original idea included turning a derided instrument which lacked a serious repertoire of its own into a respected concert instrument. It was an \"outsider instrument\" with a \"blank slate\" that was not limited by the conventions of either classical or rock music.\n\nHinchliffe told the Houston Chronicle that the post-punk idea was for the orchestra to be an \"antidote to pomposity, egomania, cults of personality, rip-offs, music-business-standard-operational nonsense and prima donnas.\" The orchestra members had previously worked in various music genres but had tired of the conventions, genre stereotyping and pretentiousness within the music industry. The UOGB has remained an independent music act which has deliberately not signed to a record label. Hinchliffe told the Yorkshire Post that the idea of the UOGB was to have a bit of fun \"where we're not having the agents and the managers and the record companies dictating terms.\"\n\nThe orchestra has appeared on a wide range of television and radio programs both in the UK and internationally.
The UOGB has collaborated with David Bowie, Madness, Robbie Williams, Yusuf Islam (Cat Stevens), the Kaiser Chiefs, the Ministry of Sound, and the film music composer David Arnold. While the orchestra sells its albums directly from its official website, most of its income is derived from touring, with around 110 concerts a year according to the New Zealand news website Stuff (the British Council stated in 2014 that the UOGB had performed some 9,000 concerts over the previous 29 years).\n\nDuring the 2020\u201321 COVID-19 pandemic, the orchestra's members, unable to tour due to the lockdowns and separated in their various homes, released 13 music videos as a group on YouTube, called the Ukulele Lockdown series (these were collected together and released as the virtual opening concert for the 2021 San Francisco Performances PIVOT Festival), plus a series of ukulele video tutorials and other ukulele videos, followed by five original 'The Ukulele World Service' online pay-per-view concerts. In 2021, Laura Currie became a full-time member of the Ukulele Orchestra of Great Britain. She had been a stand-in since 2019, had toured with the orchestra, and edited the lockdown videos.\n\nThe Ukulele Orchestra of Great Britain has been described by the Daily Telegraph, the Guardian and others as a \"much-loved British institution\" that has become a \"worldwide phenomenon\" with an \"international cult status\". The orchestra has received positive reviews of its concerts from critics. The Manchester Evening News said of the orchestra that it had \"a beautiful chemistry that represents fun, innocence, daftness and a genuinely enjoyable showcase of unique talent.\" The Kansas City Star considered that the orchestra had \"taken the comic aspects and musical capabilities of the ukulele and blended them together into a well-honed act, delivered with marvellous nonchalance and impressive versatility.\" The Financial Times\u2019 Laura Battle applauded the orchestra members\u2019 \"consummate skill\" and said that the \"sophisticated sound they make both percussive and melodic is at once hilarious and heartfelt.\"\n\nBBC Radio 4 and the Canadian Now described the Ukulele Orchestra as a union of skilful musicianship with a subversive post-punk delivery, and The Press (York) added that they used the limitations of the ukulele \"to create a musical freedom that reveals unsuspected musical insights\". Classic FM described the UOGB's rendition of Ennio Morricone's The Good, The Bad And The Ugly as both \"sprightly\" and \"delightfully delicate\", remaining true to the epic composition of the original work, while the Australian Stage.com called the UOGB's cover \"jaw-dropping\".\n\nIn 2009, Erwin Clausen, a German producer, approached the UOGB with a request to set up a franchise version of the band in Germany. The UOGB denied his request; however, Clausen assembled the United Kingdom Ukulele Orchestra (UKUO), which performed in a very similar style to that of the UOGB. Based in Germany, the UKUO, just like the UOGB, consisted of eight British musicians (six men and two women) who wore semi-formal evening dress and sat in a line behind music stands, performing a similar range of cover versions of popular music and similar comedy. Judge Richard Hacon, sitting at the Intellectual Property Enterprise Court, initially declined to issue an injunction to stop the UKUO touring England in 2014, as proceedings had been issued too late.
Ultimately, the Court found that the German-based ukulele troupe was causing confusion, and so the claim of passing off succeeded. The judge ruled that Clausen had \"acted outside honest practices\" when he set up the UKUO, and evidence showed that the similarity between the two orchestras' names did confuse the public, \"who recognise The Ukulele Orchestra of Great Britain as the trade name of a particular musical act\", into believing that the two orchestras, UOGB and UKUO, were either the same group or otherwise commercially connected. The Court found that this had caused damage to the Ukulele Orchestra of Great Britain's goodwill, especially by way of the UOGB's loss of control over their reputation as artists. However, though the similarities in name amounted to passing off, the judge ruled that Clausen and the UKUO were not guilty of copyright or trademark infringement as far as the style of the performance was concerned.", "doc_id": "8e77e472-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Homelessness_in_California", "document": "Homelessness in California is considered a major social issue. In 2017, California accounted for 22% of the nation's homeless population, although the state's residents make up only 12% of the country's total population. The California State Auditor found in its April 2018 report, Homelessness in California, that the U.S. Department of Housing and Urban Development noted that \"California had about 134,000 homeless individuals, which represented about 24 percent of the total homeless population in the nation.\" The California State Auditor is an independent government agency responsible for analyzing California's economic activities and then issuing reports.\n\nThe Sacramento Bee notes that large cities like Los Angeles and San Francisco both attribute their increases in homelessness to the housing shortage. In 2017, homeless persons in California numbered 135,000 (a 15% increase from 2015).
By January 2021, that figure had increased further: an estimate by the United States Department of Housing and Urban Development counted over 161,000 homeless people in California.\n\nA 2022 study found that differences in per capita homelessness rates across the United States are not due to mental illness, drug addiction, or poverty, but to differences in the cost of housing, with West Coast cities including San Francisco, Los Angeles and San Diego having homelessness rates five times those of areas with much lower housing costs like Arkansas, West Virginia, and Detroit, even though the latter locations have high burdens of opioid addiction and poverty.\n\nIn their book Homelessness is a Housing Problem, Clayton Page Aldern (a policy analyst and data scientist in Seattle) and Gregg Colburn (an assistant professor of real estate at the University of Washington\u2019s College of Built Environments) studied per capita homelessness rates across the country, along with the possible factors influencing those rates, and found that high rates of homelessness are caused by shortages of affordable housing, not by mental illness, drug addiction, or poverty.\n\nThey found that mental illness, drug addiction and poverty occur nationwide, but not all places have equally expensive housing.\u200a One example cited is that two states with high rates of opioid addiction, Arkansas and West Virginia, both have low per capita rates of homelessness because of low housing prices.\u200a With respect to poverty, the city of Detroit is one of the poorest in the country, yet Detroit's per capita homelessness rate is 20% that of West Coast cities like Seattle, Portland, San Francisco, Los Angeles, and San Diego.\n\nThe United States Interagency Council on Homelessness estimated that there were over 129,000 homeless people on any given day in California in 2018. As of 2020, the estimate is around 160,000 people. This is less than 0.5% of the state's total population, but far more than in any other state in the union. Factors that contribute to homelessness include mental illness, addiction, tragic life events, poverty, job loss and a lack of affordable housing. According to the National Low Income Housing Coalition (2018), there is no state that has an adequate supply of affordable housing. California has been identified as having only 22 affordable homes for every 100 of the lowest-income renters (nlihc.org), putting the housing shortage in California at over 1 million homes. While programs to help the homeless do exist at the city, county, state and federal levels, these programs have not ended homelessness.\n\nFormer state Assemblyman Mike Gatto proposed in a 2018 opinion piece that a new form of detention be created as a method to force drug-addicted and mentally ill homeless persons (who make up two-thirds of California's homeless population) off the streets and into treatment, as well as to lengthen the jail terms for misdemeanors.\n\nAs the number of homeless people increased, the problem emerged as a major issue during the governor's race in 2018. The shortage of affordable housing contributes to the increasing number of homeless people, as does the shortage of assistance and support programs to help this population maintain a course of action towards improvement.
CALmatters (2018) describes three stages of homelessness: \"Chronic, Transitional, and Episodic\".\n\nIn 2019, Senate Bill 1152, signed by California Governor Jerry Brown, took effect, requiring hospitals to create a discharge plan for homeless patients before discharging them and to ensure that they have food, shelter, medicine, and clothing for their post-hospital care. This bill addressed the problem of homeless people not being able to heal properly or receive follow-up care once discharged from the hospital. While many of the homeless are eligible for free health insurance from Medi-Cal, there is confusion surrounding how to apply, and many other difficulties the homeless must address first, leaving many of them without health insurance.\n\nProject Roomkey is a homeless relief program designed to mitigate the spread of the COVID-19 virus among the homeless population. It began in March 2020, with funding largely coming from FEMA. The program was slated to end in late 2020, but was continued with state and local funding. The program housed the homeless in vacant motel or hotel rooms, particularly those aged 65 or older or with an underlying medical condition.\n\nProject Homekey is a homeless relief program established as a continuation of Roomkey. Phase one of the initiative received $600 million in combined funding from the United States federal government's Coronavirus Aid Relief Fund (CARES Act) and California's general fund, and ended in December 2020. Homekey focuses on the creation of low-cost housing by repurposing hotels, motels, vacant apartments, and other buildings.\n\nOn July 19, 2021, California Governor Gavin Newsom signed a $12 billion bill to \"fight\" homelessness. Of the total, $150 million would be set aside to continue Project Roomkey, and $5.8 billion would go to building new housing units for phase two of Project Homekey.", "doc_id": "8e77e5a8-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/George_Karl", "document": "George Matthew Karl (born May 12, 1951) is an American former professional basketball coach and player. After spending five years as a player for the San Antonio Spurs, Karl became an assistant with the team before getting the chance to become a head coach in 1981 in the Continental Basketball Association. Three years later, he became one of the youngest NBA head coaches in history when he was named coach of the Cleveland Cavaliers at age 33. By the time his coaching career came to an end in 2016, Karl had coached nine different teams in three different leagues (CBA, NBA, Liga ACB) and had been named Coach of the Year three times in total (twice in the CBA and once in the NBA), with one championship in the FIBA Saporta Cup. He is one of nine coaches in NBA history to have won 1,000 NBA games (his career included twelve seasons with fifty or more wins), and he was named NBA Coach of the Year for the 2012\u201313 season. While he never won an NBA championship, Karl made the postseason 22 times with five different teams, including a trip to the 1996 NBA Finals with the Seattle SuperSonics.\n\nAfter his playing career, Karl spent two years with the Spurs coaching staff as an assistant coach. He was then named head coach of the Montana Golden Nuggets of the Continental Basketball Association. Karl guided the team to the CBA Finals in 1981 and 1983, winning Coach of the Year both seasons.
Despite the success on the court, the franchise folded in 1983.\n\nIn 1983, Karl returned to the NBA with the Cleveland Cavaliers as director of player acquisition. Head coach Tom Nissalke was fired after the season in May 1984, and at age 33, Karl was promoted to head coach in late July. In his first season, the Cavaliers made the playoffs for the first time in six seasons. The success did not carry over to the next season, and Karl was dismissed by the Cavaliers in mid-March after a disappointing 25\u201342 start; Cleveland finished 4\u201311 under assistant Gene Littles to end up at 29\u201353. For the next two months, he was a scout and adviser to the Milwaukee Bucks.\n\nIn late May 1986, Karl was named head coach of the Golden State Warriors; he took them from a record of 30\u201352 the year before to the playoffs for the first time in ten years. In the first round, they faced the Utah Jazz in a best-of-five series. Each team won two close games at home, setting up a decisive fifth game in Utah that the Warriors won to advance to the playoff semifinals.\n\nMatched up in the semifinals against the Los Angeles Lakers, who had won three championships in the previous seven seasons, Karl's team was expected to be swept by the much more experienced Lakers, and promptly lost the first three games. Facing elimination in game 4, the Warriors overcame a twelve-point fourth-quarter deficit and won 129\u2013121 thanks to Sleepy Floyd\u2019s 51-point game. Game 4 was the only game the Lakers lost in the Western Conference playoffs that year, en route to the first of their back-to-back championships.\n\nDuring the 1987\u201388 season, the Warriors got off to a rough start, and team management decided to trade Purvis Short, Sleepy Floyd and Joe Barry Carroll in order to save money and get younger. With Chris Mullin going through alcohol rehabilitation, Karl was now without his top four scorers from the 1987 playoff team. Frustrated with the team's direction, he resigned from the Warriors with 18 games left in the season. Though he resigned, there has been speculation that Karl was actually fired, as he signed a non-disclosure agreement and received a buyout of his contract.\n\nOn September 5, 1988, Karl was named head coach of the Albany Patroons of the CBA, leading them to a 36\u201318 record. In 1989, Karl coached Real Madrid of Liga ACB. Madrid finished 69\u201317, though they dealt with the death of their best player, Fernando Mart\u00edn Espina. Real Madrid came third in the Spanish league, were Spanish cup semifinalists, and lost the final of the Saporta Cup, Europe's second most important cup competition.\n\nKarl returned to coach the Patroons in 1990, leading them to a 50\u20136 season while winning all 28 home games. For his efforts, Karl was named CBA Coach of the Year for the third time. Karl then returned to Real Madrid for the 1991\u201392 season, before leaving to return to the NBA. Real Madrid won the Saporta Cup, came second in the Spanish league, and lost in the quarterfinals of the Spanish cup.\n\nOn January 23, 1992, Karl was named head coach of the Seattle SuperSonics, replacing K.C. Jones. Karl led a late-season surge, going 27\u201315 and entering the playoffs as the sixth seed.
In the first round, they upset his former team, the Golden State Warriors, in four games, but lost in the second round to the Utah Jazz.\n\nIn his second (and first full) season as the SuperSonics coach in 1992\u201393, the team improved their 47\u201335 record to 55\u201327 and qualified for the playoffs as the third seed in the Western Conference. They defeated the Utah Jazz 3\u20132 in the first round and the Houston Rockets 4\u20133 in the semifinals. Seattle lost in the Western Conference Finals to the Charles Barkley\u2013led Phoenix Suns in a full seven-game series, falling just one game short of the NBA Finals.\n\nThe following season, Seattle won 63 games and their first Pacific Division title since their 1979 championship season. Despite a rift with mid-season acquisition Kendall Gill, Karl led the Sonics to the top seed in the Western Conference. Playing the eighth-seeded Denver Nuggets in the opening round of the playoffs, Seattle won their first two games at home, but lost the following three, including the closing game at home, to become the first top seed to lose to an eighth seed in playoff history.", "doc_id": "8e77e706-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Secret_(South_Korean_group)", "document": "Secret was a South Korean K-pop girl group formed by TS Entertainment in 2009. The group debuted with four members: Jun Hyo-seong, Jung Ha-na, Song Ji-eun and Han Sun-hwa. They released their debut single \"I Want You Back\" in October 2009. Secret's debut single did not meet with great success, and it was not until the following year that the group saw a rise in popularity. In 2010, Secret released two singles, \"Magic\" and \"Madonna\", which earned much attention, with the two singles peaking at No. 2 and No. 1 respectively on the Gaon Digital Chart. With the success of \"Magic\" and \"Madonna\", the group received the \"Newcomer Award\" at the 25th Golden Disk Awards.\n\nIn 2011, Secret adopted a girl-next-door image through songs like \"Shy Boy\" and \"Starlight Moonlight\", which led the group to major success. With the hit single \"Shy Boy\", Secret won their first music show award on M Countdown; they also managed to stay at number one on Music Bank for three consecutive weeks, earning them a triple crown. \"Shy Boy\" and \"Starlight Moonlight\" won Secret multiple awards, including two Song of the Year awards at the 1st Gaon Chart Awards for the months of January and June. Secret released their first full-length album, Moving in Secret, in October 2011, featuring the lead single \"Love is Move\", which showcased Secret's sexy and confident side again. With the group's success in South Korea throughout 2011, Secret sold over seven million digital downloads.\n\nIn August of the same year, Secret made their Japanese debut, releasing their first single, \"Madonna\", a remake of their Korean hit single, which debuted at number nine on the Oricon charts. In November 2011, \"Shy Boy\" was remade to serve as the lead single on their first Japanese mini album, Shy Boy, which also featured a Christmas remake of \"Starlight Moonlight\" titled \"Christmas Magic\". Throughout 2012, Secret promoted heavily in Japan, releasing two Japanese singles, \"So Much For Goodbye\" and \"Twinkle Twinkle\", prior to releasing their first full-length Japanese album, Welcome to Secret Time.
\"Twinkle Twinkle\" was used as the ending theme song of the Naruto spin-off Naruto SD: Rock Lee and his Ninja Pals, which aired on TV Tokyo.\n\nAfter almost a year of absence from the South Korean music industry, Secret released their third extended play, Poison, in September 2012, followed by the digital single \"Talk That\" in December. The following year, Secret released their fourth extended play, Letter from Secret, in April 2013 and their third single album, Gift From Secret, in December 2013. In August 2014, Secret released their fifth extended play, Secret Summer.\n\nRetro is the main musical style of the majority of Secret's singles, although the group has channeled other genres such as pop, dance, R&B, and hip-hop. Secret were originally formed with the intention of being an R&B and hip-hop group, as seen in their debut single, \"I Want You Back\". As Seoulbeats wrote, \"Originally debuting with an urban RnB concept, the girls of Secret have transformed themselves into the queens of retro ever since their breakthrough hits with \u201cMagic\u201d and \u201cMadonna\u201d. Furthermore, with Wonder Girls recently vacating their long-held affair with retro in favor of a more fresh and futuristic approach, Secret lays claim as the next best group to have established a retro identity.\" Catherine Deen of Yahoo! Philippines said that the group is known \"for its unique ability to take retro music and make it their own.\" While reviewing \"Love is Move\", Park Hyunmin of enewsWorld commented that Secret is \"known for its pop-heavy beats and easy-to-follow dance moves\". Hyunmin further added that \"Secret's main appeal is its retro beat and sexy choreography\".\n\nSecret's output, particularly their work with Kang Ji-won and Kim Ki-bum, makes use of live instruments such as brass, saxophones and drums, with the incorporation of synthesizers and electric guitars. While reviewing \"Poison\", Seoulbeats commented that \"the composition of \u201cPoison\u201d contains a very fitting saxophone hook along with recognizable brass and drum instrumentals that have become the trademark of Secret and their in-house composers, Kang Ji-won and Kim Ki-bum, who have produced all of Secret's retro hits. Secret's secret formula for success lies not only in their determined execution of one particular style, but their consistent development of a concept that suits them very well.\" Seoulbeats concludes that \"Secret has carved out its conceptual niche in the K-pop market by developing a distinct and consistent style, and thus they are now in a position to elevate their success and recognition in the industry. While most other groups undergo drastic changes over time to keep current fans interested and to attract new fans, Secret clearly benefits from the advantages of continuity. What other groups or idols have carved out a stylistic niche over the years?\"\n\nSecret is also known for transitioning between a cute, girl-next-door image in songs like \"Shy Boy\" and \"Starlight Moonlight\" and a sexy, powerful image in \"Magic\", \"Madonna\" and \"Poison\", while still retaining retro as the main theme of their sound. Although \"Magic\" and \"Madonna\" were commercial successes, peaking at number two and number one respectively on the Gaon charts, Secret failed to win a first-place award on any televised South Korean weekly music show, such as M! Countdown, Music Bank or Inkigayo, until \"Shy Boy\".
Seoulbeats wrote, \"Secret embarked on an aegyo phase in their following release of \u201cShy Boy\u201d although with much voiced displeasure from the members. Despite the radical change in their personalities, their style remained consistent in staying with and expanding upon their retro identity. The song became a major success as Secret traded in their sex appeal for a heavy dose of aegyo, resulting in much hardware for their chart-topping song.\" Secret kept their cute imagery with their following single \"Starlight Moonlight\" until the release of \"Love is Move\" and \"Poison\". During their promotions with \"Poison\", Jun Hyoseong commented, \"Secret has always been known as a cute group that appeals to the general public. This time, however, we\u2019ve escaped that and showed off our unique charms. Our visuals have also changed for the sexy.\" Jun added, \"We\u2019re actually closer to Shy Boy in real life.\"\n\nIn 2010, Secret released their hit single \"Magic\", which was nominated at the 12th Mnet Asian Music Awards for Best Dance Performance by a Female Group. The same year, the group released their number one hit single \"Madonna\", which won them a Bonsang award at the 20th Seoul Music Awards. With the success of \"Magic\" and \"Madonna\", the group received the \"Newcomer Award\" at the 25th Golden Disk Awards.\n\n\"Shy Boy\" earned the group's first win on Mnet's M! Countdown and SBS's Inkigayo. The song was their first Triple Crown on KBS's Music Bank and garnered them multiple awards and nominations. The song was nominated for Song of the Year and Best Dance Performance by a Female Group at the 13th Mnet Asian Music Awards. The song won a Bonsang award at the 21st Seoul Music Awards and at the 3rd Melon Music Awards. It also won the Song of the Year Award at the 1st Gaon Chart Awards for the month of January. Following the success of \"Shy Boy\", Secret released \"Starlight Moonlight\" in June 2011. The song earned them their second win on Inkigayo and won them a Digital Bonsang award at the 26th Golden Disk Awards. In 2012, \"Starlight Moonlight\" won the Song of the Year Award at the 1st Gaon Chart Awards for the month of June.", "doc_id": "8e77e8dc-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/USS_Black_Arrow", "document": "USS Black Arrow (ID-1534) was a troop transport commissioned in 1919 to assist in the post-World War I repatriation of U.S. troops from France. Black Arrow was originally SS Rhaetia, a passenger-cargo ship built in Germany in 1904\u201305 for the Hamburg-America Line. From 1905 to 1914, Rhaetia operated primarily between Hamburg, Germany and South America, though she was also intermittently employed as an immigrant ship to the United States. With the outbreak of World War I in August 1914, Rhaetia was interned at Philadelphia.\n\nWith the entry of the United States into the war in April 1917, Rhaetia and other German ships interned in U.S. ports were seized by the U.S. government for possible use in the war effort. After repairs, the former Rhaetia went into service with the U.S. Army as a general transport under the names USAT Black Hawk and later USAT Black Arrow, making five round trips between the United States and France from June 1917 to the end of the war in November 1918. The ship was then converted into a troop transport in order to assist with the repatriation of U.S. troops from France. Commissioned into the U.S.
Navy as USS Black Arrow (ID-1534), the ship subsequently made three round trips to France from April to July 1919, returning a total of 4,759 troops to the United States, before decommissioning in August.\n\nReverting to the name SS Black Arrow following her naval decommissioning, the vessel was given a refit before being chartered by the United States Shipping Board to the American Line. She then recommenced merchant service as a passenger-cargo ship, inaugurating a new service from New York to Black Sea and Near East ports, and in December 1919 became the first ship to return to the United States from Constantinople since the outbreak of the war. After only one more voyage to the Near East, however, the ship was given another refit and chartered to the Ward Line for service between New York and Spain.\n\nIn August 1921, on her fourth voyage to Spain, Black Arrow ran aground off the Spanish coast at Cape Vilan. Refloated, she was returned to New York in November but saw no further service. After being laid up for an extended period, she was scrapped in New Jersey in late 1924.\n\nRhaetia\u2014a steel-hulled, screw-propelled passenger-cargo ship and the sister ship of Rugia\u2014was built in 1904\u201305 by Bremer Vulcan of Vegesack, Germany, for the South American service of the Hamburg-America Line. Her yard number was 476. She was launched 5 November 1904 and completed 5 May 1905.\n\nRhaetia had a length of 408 feet 4 inches (124.46 m), beam of 52 feet 7 inches (16.03 m), hold depth of 28 feet (8.5 m) and draft of about 25 feet (7.6 m). She had a gross register tonnage of 6,600, net register tonnage of 4,141, deadweight tonnage of 7,050 long tons and (as measured in later U.S. Navy service) displacement of 11,900 long tons. She was fitted with accommodation for 100 first-class and 800 third-class (steerage) passengers, which included \"all modern appliances for lighting, heating and refrigeration.\" Her original cargo capacity is not known, but in later American service it was listed as 330,330 cubic feet (bale) or 356,229 cubic feet (grain). The vessel had two masts; a single smokestack; one deck, not including the shelter deck; nine watertight bulkheads; and water ballast tanks with a total capacity of 1,144 tons.\n\nRhaetia was powered by a 3,200 ihp four-cylinder quadruple-expansion steam engine with cylinders of 24, 35, 51 and 72 inches (61, 89, 130 and 183 cm) by 54-inch (140 cm) stroke, driving a single screw propeller. Steam was supplied by four single-ended, coal-fired Scotch boilers with a working pressure of 215 psi (1,480 kPa). With a coal bunker capacity of 1,590 tons and average coal consumption of 46 tons per day, the ship had a steaming radius of 8,784 nautical miles (16,268 km; 10,108 mi). Rhaetia had a service speed of 13 knots (15 mph; 24 km/h).\n\nIn early January 1922, a few weeks after Black Arrow's return to New York, the vessel was offered for sale by the USSB, \"as is, where is\". Later, she was laid up in the Passaic River, New Jersey, for an extended period.\n\nBy 1924, Black Arrow and several other ex-USSB ships had been acquired by H. L. Crawford & Co. for the purpose of testing a new ship-breaking method which the firm's proprietor, H. L. Crawford, hoped would prove competitive with foreign yards. Crawford founded a new $75,000 firm, the American Ship Breaking Company, and established a shipbreaking plant at Howland Hook, New Jersey.
In September 1924, Black Arrow had her machinery removed on Crawford's behalf at the Shupe Terminal Company, Kearny, New Jersey, after which the ship was to be taken to the Howland Hook plant for dismantling of the hull.", "doc_id": "8e77e9fe-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Formula_of_Love:_O%2BT%3D%EF%BC%9C3", "document": "Formula of Love: O+T=<3 is the third Korean studio album (sixth overall) by South Korean girl group Twice. It was released on November 12, 2021, by JYP Entertainment and Republic Records. It follows the release of the group's first English-language single, \"The Feels\".\n\nSelling over 700,000 units during its pre-order period, the album became Twice's best-selling album to date, surpassing a record previously held by More & More (2020). Primarily a disco-pop record, the album incorporates a handful of genres such as Latin pop, hip hop, R&B, and synthpop. It also debuted at number three on the Billboard 200 with 66,000 album-equivalent units, becoming the group's fourth and highest entry on the chart.\n\nFollowing Eyes Wide Open (2020), Formula of Love: O+T=<3 is Twice's third Korean-language studio album. It is the group's third release in 2021, following their tenth Korean extended play, Taste of Love, and third Japanese album, Perfect World.\n\nFormula of Love: O+T=<3 was first teased by Twice member Chaeyoung on September 13, 2021, in the behind-the-scenes video for her photoshoot with OhBoy! magazine, although at the time, not much was known about it. At the end of the music video for Twice's first English single, \"The Feels\", a full-length album scheduled for release in November 2021 was teased. The name of the album and its release date were revealed on October 8. A preview showing the four versions of the physical album was posted on October 12. Pre-orders began later that day. On October 29, the album's track listing was announced.\n\nFormula of Love: O+T=<3 is a fifteen-track[c] album that features genres such as city pop, dance-pop, deep house, disco, hip hop, Latin pop, nu-disco, reggaeton, and R&B. Twice members Nayeon, Jihyo, Dahyun, and Chaeyoung took part in writing some songs from the album. In an interview with the Associated Press, Jihyo revealed that the death of a houseplant was her inspiration for writing the song \"Cactus\".\n\nFormula of Love: O+T=<3 opens with its title track, \"Scientist\", a \"funky\" dance-pop song that blends elements of synth-pop and deep house, featuring '80s-inspired synths and \"groovy\" bass lines in its production. Lyrically, it delves into the theme of love and studying the fundamentals of romance; using science-related wordplay, the group declares that there is no right answer to love. It is followed by two English songs, \"Moonlight\" and \"Icon\", with the former channeling '80s nostalgia through its \"tropical disco vibes\" and \"cute percussion, claps and marimba leads\", and the latter asserting the \"most swag Twice can offer\".[16] Following these two are songs written by Twice members: \"Cruel\" by Dahyun, \"Real You\" by Jihyo, and \"F.I.L.A. (Fall in Love Again)\" by Nayeon.\n\nOn November 10, 2021, it was reported that the album had gained over 630,000 pre-order sales by November 8, becoming Twice's most pre-ordered and best-selling album of all time before its release. By November 10, it had reached over 700,000 pre-order sales. MRC Data reported that the album sold 66,000 album-equivalent units in the United States in its first week.
Of these, 58,000 were pure sales, 8,000 were streaming-equivalent units, and a negligible amount were track-equivalent units. On January 6, 2022, the Korea Music Content Association (KMCA) certified Formula of Love: O+T=<3 2\u00d7 Platinum after it sold more than 500,000 units in South Korea.\n\nFormula of Love: O+T=<3 debuted at number 1 on South Korea's Gaon Album Chart, making it Twice's tenth number-one album on the chart, following More & More (2020). In Japan, it peaked at number 17 on Billboard Japan's Hot Albums chart and at number 2 on Oricon's Albums Chart. The album became Twice's highest-charting album in the US and Canada to date, peaking at numbers 3 and 17 on the Billboard 200 and the Canadian Albums Chart, respectively. In addition, the album spent eight consecutive weeks on the Billboard 200, and it peaked at numbers 7, 2, and 1 on Billboard's Tastemakers, Top Album Sales, and World Albums charts, respectively. In Europe, the album appeared on Belgium's Ultratop Flanders and Wallonia 200 Albums, Finland's Top 50 Albums, Lithuania's Top 100 Albums, the Netherlands' Album Top 100, and the United Kingdom's Album Downloads Chart.", "doc_id": "8e77eaee-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Shining_Tears_X_Wind", "document": "Shining Tears X Wind is a Japanese anime based on the PlayStation 2 games Shining Tears and Shining Wind. Shining Tears X Wind presents an adapted version of Shining Wind's story, seen from the perspective of the character Souma. It is directed by Hiroshi Watanabe and produced by Studio Deen. The series aired in Japan from April 6 to June 29, 2007. The anime has the same opening and ending themes as the game Shining Tears. A sequel, the mobile game Shining Wind X, was released in January 2008.\n\nA group of students from St. Luminous College are investigating the mysterious disappearances that are happening all around Tatsumi Town. One student, who was researching a book entitled 'End Earth', which describes an alternate world, also mysteriously disappears. When Mao, a being from the other world, enters their world to look for her friend Zero, she teams up with Souma and Kureha to battle a monster that has also come to their world. Just when things seem to go their way, an accident teleports Souma and Kureha into another world. To make things more complicated, Zero appears to the two, stating that he will now entrust the world to Souma. As they travel this new world, they encounter familiar friends and enemies, and they begin to realize that getting back to their own world is harder than they previously thought.\n\nIn End Earth, a war begins over a legendary item called \"The Holy Grail\". Zero later tells Souma that the world will end if Zeroboros, the guardian of time and space, awakens. This guardian will use whoever wins the war as a body.\n\nSouma Akizuki\nAn athletic student who, along with Kureha, was transported to the dream continent, End Earth. At the beginning of the series, Souma has feelings for Kureha, despite knowing that she has feelings for Kiriya; he sometimes gets annoyed at Kiriya for not noticing her feelings for him. He then confesses his affection for her, which apparently is not mutual, shortly before being transported to the other world. After seeing Kureha presumably die, he pulls a legendary sword from her body.
Souma turns out to be a 'Soul Blader', a person who can draw a sword from the heart of anyone with whom he shares similar emotions. Unlike Kureha, Souma would rather stay in End Earth, as he believes it is the right place for him and that he could be \"together\" with Kureha there. Later, after Souma and Kureha meet up with Kiriya and Seena, he departs from the group, leaving Kiriya and Seena to take care of Kureha. Realizing that she does not feel the same way about him, he decides to no longer use Kureha's heart as a Soul Blade, since he knows that her heart belongs to Kiriya. He temporarily uses a katana when there are no Soul Blades around for him to use. When Souma and Kureha were between dimensions, they were greeted by Zero, who entrusted Souma with guarding the world from then on. After his second meeting with Zero, he receives one of the Twin Dragon Rings and begins to see the world in a whole new way. His attitude also shifts from rash and emotional to calm and understanding. Souma becomes a sort of mediator, not taking any sides, and travels with Lazarus, Ryuna, Elwyn, and Blanc Neige in his mission to protect and save the world. After expressing his desire to join Weissritter, he is nominated as its proxy leader. In the end, after seeing Mao's sad and tearful face, he realizes his true feelings for Mao and her feelings for him. Souma then gives up his chance of going back to his own world and decides to stay with Mao and Weissritter.\nOf all the characters, Souma extracts the most Soul Blades, having drawn blades from Kureha, Hiruda, Kiriya (who in fact possesses his Holy Grail), and all the members of the current Weissritter.\n\nTouka Kureha\nA student who also happens to work at a shrine as a miko, or Shinto priestess. Kureha is skilled in the art of archery, or ky\u016bd\u014d. She is transported to End Earth with Souma after they help Mao defeat a monster. Upon their arrival, she is attacked by monsters and presumably dies, but she is revealed to be unscathed after Souma pulls a sword from her body, the Spirit Sword Snow Moon Flower. After Kiriya and Souma's fight, she joins Kiriya and the other Luminous Knights. Kiriya can also draw a sword from her, the Spirit Sword Blazing Sunlight. Kureha has feelings for Kiriya, but she never expresses them to him. In the end, she returns to her own world, together with Kiriya and Seena.\n\nKaito Kiriya\nA quiet and shy student who receives a message from an elven woman from End Earth. When he tells his friends, the group dismisses the message, believing it to be only a dream. Kiriya is a skilled swordsman in styles ranging from kendo to fencing. It is said that no one could beat him if he were to fight seriously. The cherry blossom tree at their college transports him and Seena to End Earth right after Souma and Kureha, taking the entire building as well. He becomes the knight for an independent mercenary group called the 'Luminous Knights', which is started by Seena. Like Souma, Kiriya is a Soul Blader, and he uses Seena's \"heart\" as a sword. Kiriya is completely oblivious to Kureha's and Seena's feelings for him, the latter being one of his childhood friends. He later forms a close bond with an elf named Xecty, who is later revealed to be an artificial recreation of the Elven Queen. Until Xecty's death, he was confident in his mission to have Saionji admit defeat.
Since the start of the war, his attitude shifts from his shy and quiet self to a rash and emotional state, in which he sees fighting as the only way to win (the complete opposite of Souma's change in behavior). He is later revealed to hold Souma's Holy Grail, Souma's Ultimate Soul Blade; his own Holy Grail, in turn, is held by Xecty. In the end, he chooses to return to his own world, together with Kureha and Seena.", "doc_id": "8e77ec1a-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Miguel_Miram%C3%B3n", "document": "Miguel Gregorio de la Luz Aten\u00f3genes Miram\u00f3n y Tarelo, known as Miguel Miram\u00f3n (29 September 1831 \u2013 19 June 1867), was a Mexican conservative general who became president of Mexico at the age of 27 during the Reform War, serving between February 1859 and December 1860. He was the first Mexican president to be born after the Mexican War of Independence.\n\nA cadet in military school at the beginning of the Mexican\u2013American War, Miram\u00f3n saw action at the Battle of Molino del Rey and the Battle of Chapultepec during the American invasion of Mexico City. After the triumph of the liberal Plan of Ayutla in 1855, Miram\u00f3n participated in a series of conservative counter-coups until his efforts merged with the wider Reform War, triggered in late 1857 by the Plan of Tacubaya led by F\u00e9lix Zuloaga, which rejected the liberal Constitution of 1857. The first year of the war was marked by a series of conservative victories led by Miram\u00f3n, which led the press to dub him the \"Young Maccabee\". After a moderate faction of conservatives overthrew Zuloaga in an effort to reach a compromise with liberals, a conservative junta of representatives elected Miram\u00f3n as president. Miram\u00f3n would lead the conservatives for the rest of the war, leading two sieges against the liberal capital of Veracruz, where Benito Ju\u00e1rez maintained his role as president of the Second Federal Republic. The second siege failed after the United States Navy intercepted Miram\u00f3n's naval forces, and liberal victories accumulated thereafter, ending the war in 1860. Miram\u00f3n escaped the country and went into exile in Europe, being received at the Spanish court.\n\nHe returned to Mexico in 1862 during the early stages of the Second French intervention, offering his assistance to the Second Mexican Empire. Emperor Maximilian was a liberal, and in order to defuse conservative opposition to the Empire, he sent Miram\u00f3n to Prussia, ostensibly to study military tactics. Miram\u00f3n returned to serve the conservatives, and supported Maximilian until the fall of the Second Mexican Empire in May 1867. The restored Mexican government had Miram\u00f3n, Maximilian and Tomas Mej\u00eda court-martialed and sentenced to death. They were shot on June 19, 1867.\n\nDuring the period of La Reforma, Miram\u00f3n participated in the various conservative counter-revolutions after the triumph of the liberal Plan of Ayutla in 1855. He joined Antonio de Haro y Tamariz at Zacapoaxtla in 1856, fighting at the head of the 10th and 11th battalions at the Loma de Montero. He saw action on the outskirts of Puebla on March 10, but went into hiding when the city fell.\n\nIn October 1856, he was second in command of a conservative revolt proclaimed at Puebla. With a thousand soldiers, he defended the city for forty-three days against an army of six thousand men, causing great damage to the liberal forces.
When the city finally fell, Miram\u00f3n refused to surrender; instead, at the head of one hundred and fifty men, he fled and took the city of Toluca on January 18, 1857, seizing some artillery before heading to the town of Temascaltepec, where he was wounded and defeated. He was imprisoned but escaped in September, soon afterwards joining the reactionary forces in the South. As second in command, he captured the city of Cuernavaca and in January 1858 marched to Mexico City, where the Plan of Tacubaya, led by F\u00e9lix Zuloaga, had overthrown the liberal government of Ignacio Comonfort, inaugurating what came to be known as the Reform War.\n\nMiram\u00f3n's most important military priority was now the capture of Veracruz. He left the capital on February 16, leading his troops in person along with his minister of war. Meanwhile, Aguascalientes and Guanajuato had fallen to the liberals. Liberal troops in the West were led by Degollado and headquartered in Morelia, which now served as a liberal arsenal. The conservatives, meanwhile, feeling the effects of the malarial climate, abandoned the siege of Veracruz by March 29. Degollado made another attempt on Mexico City in early April and was utterly routed in Tacubaya by Leonardo M\u00e1rquez, who captured a large amount of war materiel and gained infamy in this battle for including medics among those executed in its aftermath.\n\nOn April 6, the Ju\u00e1rez government was recognized by the United States, and on July 12, the liberal government nationalized the property of the church and suppressed the monasteries. The sale of this property provided the liberal war effort with new funds, though not as much as had been hoped for, since speculators were waiting for more stable times to make purchases.\n\nMiram\u00f3n met the liberal forces in November, at which point a truce was declared and a conference was held on the matter of the Constitution of 1857 and the possibility of a constituent congress. Negotiations broke down, however, and hostilities resumed on the 12th, after which Degollado was routed at the Battle of Las Vacas.\n\nOn December 14, 1859, the Ju\u00e1rez government signed the McLane\u2013Ocampo Treaty, which granted the U.S. perpetual rights to transport goods, and even troops, across three key trade routes in Mexico, and granted Americans an element of extraterritoriality. The treaty caused consternation among the conservatives, the European press, and members of Ju\u00e1rez's cabinet; however, the issue was rendered moot when the U.S. Senate failed to approve the treaty.\n\nMeanwhile, Miram\u00f3n was preparing another siege of Veracruz, heading out of the capital on February 8, once again leading his troops in person along with his war minister, and hoping to rendezvous with a small naval squadron led by the Mexican General Marin, sailing from Havana. The United States Navy, however, had orders to intercept it.\n\nMiram\u00f3n arrived at Medellin on March 2 and awaited Marin's attack in order to begin the siege. The American steamer Indianola, however, had anchored near the fortress of San Juan de Ulua in order to defend Veracruz from attack.\n\nOn March 6, Marin's squadron, composed of the General Miram\u00f3n and the Marques de la Habana, arrived off Veracruz and was captured by Captain Jarvis of the U.S. Navy.
The ships were sent to New Orleans, along with the now-imprisoned General Marin, depriving the conservatives of an attacking force and of the substantial amount of artillery, guns, and rations they were carrying on board for delivery to Miram\u00f3n.\n\nMiram\u00f3n's effort to besiege Veracruz was abandoned on the 20th of March, and he arrived back in the capital on April 7. The conservatives had also been suffering defeats in the interior, losing Aguascalientes and San Luis Potosi before the end of April. Degollado was sent into the interior to lead the liberal campaign as their enemies ran out of resources. He appointed Uraga as Quartermaster General.\n\nMaximilian, Miram\u00f3n, and Mej\u00eda were tried for violating an 1862 decree, passed in the early stages of the French Intervention, against traitors and invaders. After the trial, a unanimous verdict of guilty was brought forth on the night of June 14, and the sentence of death was passed.\n\nAmong those who pleaded with President Ju\u00e1rez to spare their lives was Miram\u00f3n's wife, who, weeping with her two children, fainted at the feet of the president. Maximilian wrote to his European relatives asking them to take care of Miram\u00f3n's wife and her children.\n\nThe three condemned were led to the Cerro de las Campanas outside of Quer\u00e9taro on the morning of June 19. Miram\u00f3n and Mej\u00eda stood to the side of Maximilian, but Maximilian then remarked to Miram\u00f3n that \u201ca brave soldier is respected by his sovereign; permit me to yield to you the place of honor,\u201d and Miram\u00f3n was subsequently given the center position. Before being executed, he read a brief statement disavowing the charge of traitor. All three were executed at around seven in the morning.", "doc_id": "8e77edd2-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Christian_Democracy_(Italy)", "document": "Christian Democracy (DC) was a Christian democratic political party in Italy. The DC was founded on 15 December 1943 in the Italian Social Republic (Nazi-occupied Italy) as the ideological successor of the Italian People's Party, which had used the same symbol, a crusader shield (scudo crociato). As a Catholic-inspired, centrist, catch-all party comprising both centre-right and centre-left political factions, the DC played a dominant role in the politics of Italy for fifty years, and was part of the government from soon after its inception until its final demise on 16 January 1994 amid the Tangentopoli scandals. Christian Democrats led the Italian government continuously from 1946 until 1981. The party was nicknamed the \"White Whale\" (Italian: Balena bianca) due to its huge organization and official color. During its time in government, the Italian Communist Party was the largest opposition party.\n\nFrom 1946 until 1994, the DC was the largest party in the Italian Parliament, governing in successive coalitions, including the Pentapartito system. It originally supported liberal-conservative governments, along with the moderate Italian Democratic Socialist Party, the Italian Liberal Party, and the Italian Republican Party, before moving towards the Organic Centre-left involving the Italian Socialist Party. The party was succeeded by a string of smaller parties, including the Italian People's Party, the Christian Democratic Centre, the United Christian Democrats, and the still-active Union of the Centre. Former DC members are also spread among other parties, including the centre-right Forza Italia and the centre-left Democratic Party.
It was a founding member of the European People's Party in 1976.\n\nThe party's ideological sources were principally to be found in Catholic social teaching, the Christian democratic doctrines developed from the 19th century (see Christian democracy), the political thought of Romolo Murri and Luigi Sturzo, and ultimately the tradition of the defunct Italian People's Party. Two papal encyclicals, Rerum novarum (1891) of Pope Leo XIII and Quadragesimo anno (1931) of Pope Pius XI, offered a basis for its social and political doctrine.\n\nIn economics, the DC preferred cooperation to competition, supported the model of the social market economy and rejected the Marxist idea of class struggle. The party thus advocated collaboration between social classes and was basically a catch-all party which aimed to represent all Italian Catholics, both right-wing and left-wing, under the principle of the \"political unity of Catholics\" against socialism, communism and anarchism. It ultimately represented the majority of Italians who were opposed to the Italian Communist Party. The party was, however, originally equidistant between the Communists and the hard right represented by the Italian Social Movement.\n\nAs a catch-all party, the DC differed from other European Christian democratic parties, such as the Christian Democratic Union of Germany, which were mainly conservative parties; the DC comprised conservative as well as social-democratic and liberal elements. The party was thus divided into many factions, and party life was characterised by factionalism and by the double adherence of members to the party and to the factions, which were often identified with individual leaders.\n\nThe DC was characterised by a number of factions, spanning from left to right and continually evolving.\n\nIn the early years, centrists and liberal-conservatives such as Alcide De Gasperi, Giuseppe Pella, Ezio Vanoni and Mario Scelba led the party. After them, progressives led by Amintore Fanfani were in charge, though opposed by a right wing led by Antonio Segni. The party's left wing, with its roots in the left of the late Italian People's Party (Giovanni Gronchi, Achille Grandi and the controversial Fernando Tambroni), was reinforced by new leaders such as Giuseppe Dossetti, Giorgio La Pira, Giuseppe Lazzati and Fanfani himself. Most of them were social democrats by European standards.\n\nThe party was often led by centrist figures unaffiliated with any faction, such as Aldo Moro, Mariano Rumor (both closer to the centre-left) and Giulio Andreotti (closer to the centre-right). Moreover, if the government was led by a centre-right Christian Democrat, the party was often led by a left-winger, and vice versa.
This is what happened in the 1950s, when Fanfani was party secretary and the government was led by centre-right figures such as Scelba and Segni, and in the late 1970s, when Benigno Zaccagnini, a progressive, led the party while Andreotti led the government. This custom, in clear contrast with the principles of a Westminster system, deeply weakened DC-led governments, which even with large majorities were de facto unable to reconcile the several factions of the party, and it ultimately weakened the office of Prime Minister (defined by the Constitution of Italy as a primus inter pares among ministers), turning the Italian party system into a particracy (partitocrazia).\n\nFrom the 1980s the party was divided between the centre-right led by Arnaldo Forlani (supported also by the party's right wing) and the centre-left led by Ciriaco De Mita (whose supporters included trade unionists and the internal left), with Andreotti holding the balance. De Mita, who led the party from 1982 to 1989, curiously tried to transform the party into a mainstream \"conservative party\" in line with the European People's Party in order to preserve party unity. He became Prime Minister in 1988 and was replaced as party leader by Forlani in 1989. Disagreements between De Mita and Forlani brought Andreotti back to the prime-ministership from 1989 to 1992.\n\nWith the fall of the Berlin Wall, the end of the great ideologies and ultimately the Tangentopoli scandals, the heterogeneous nature of the party led to its collapse. The bulk of the DC joined the new Italian People's Party (PPI), but several centre-right elements led by Pier Ferdinando Casini immediately joined the Christian Democratic Centre (CCD), while others joined Forza Italia directly. A split from the PPI, the United Christian Democrats (CDU), joined Forza Italia and the CCD in the centre-right Pole of Freedoms coalition (later the Pole for Freedoms), while the PPI was a founding member of The Olive Tree centre-left coalition in 1996.", "doc_id": "8e77eecc-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Evercade", "document": "The Evercade is a handheld game console developed and manufactured by UK company Blaze Entertainment. It focuses on retrogaming with ROM cartridges that each contain a number of emulated games. Development began in 2018, and the console was released in May 2020, after a few delays. Upon its launch, the console offered 10 game cartridges with a combined total of 122 games.\n\nArc System Works, Atari, G-Mode, Interplay Entertainment, Bandai Namco Entertainment and Piko Interactive have released emulated versions of their games for the Evercade. Pre-existing homebrew games have also been re-released for the console by Mega Cat Studios. The Evercade is capable of playing games originally released for the Atari 2600, the Atari 7800, the Atari Lynx, the Intellivision, the NES, the SNES, and the Sega Genesis/Mega Drive, as well as arcade games.\n\nOn 31 May 2022, Blaze Entertainment announced that the console would be discontinued, with the improved Evercade EXP set to release during winter 2022-23.\n\nThe Evercade was developed by the UK-based Blaze Entertainment, which had previously produced Atari-related products and the Game Gadget. Blaze began development of the Evercade in 2018, with the intention of creating a console superior to plug-and-play devices. The Evercade was announced in April 2019 as a portable retrogaming console with the ability to be connected to a television screen.
The console would play emulated video games, with a focus on the 8-bit and 16-bit gaming eras.\n\nThe Evercade was initially scheduled for release in the fourth quarter of 2019, before being delayed to 20 March 2020. The release was later pushed back to 22 May 2020, and was expected to slip up to two additional weeks in some areas because of shipping delays caused by the COVID-19 pandemic. The console retailed for \u00a360/$80 with a pack-in game cartridge, while a premium edition retailed for \u00a380/$100 and included three game cartridges. The console is white and red, giving it a retro appearance reminiscent of the Nintendo Famicom, although a black edition was also sold in the United Kingdom. Andrew Byatt, the Evercade's development director, hoped to sell hundreds of thousands of units within the first year.\n\nOn 31 May 2022, Blaze announced that it would discontinue the Evercade in favor of an upgraded version known as the Evercade EXP.\n\nThe Evercade has a 1.2 GHz Cortex-A7 processor, 256 megabytes of RAM, and a Linux base; the unit is just over seven inches long. It has a horizontal 4.3-inch LCD screen with a resolution of 480x272 pixels. The screen uses the 16:9 aspect ratio, as some of the console's games were originally released for systems \u2013 such as the Atari Lynx \u2013 that use a wider screen ratio than 4:3. The player can switch between the two aspect ratios.\n\nLike the Nintendo Switch, the Evercade can be connected to a television, though via a mini-HDMI cable rather than a full-size HDMI port. The Evercade offers a television output of 720p, and supports high-definition upscaling on all games when the console is connected to a television. The console has a rechargeable 2,000-mAh battery that lasts four to five hours. A 3.5 mm minijack for headphones is located on the bottom of the console, along with two volume controls. The cartridge slot, power button, and the mini-HDMI port are located on the top of the system. A MicroUSB port is used for charging the battery. Unlike modern handheld consoles, the Evercade does not have a touch screen or Wi-Fi connectivity.
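The aspect-ratio switching described above is simple arithmetic on the native panel size. A minimal sketch (illustrative only; the function name and rounding choice are assumptions, not Blaze's firmware code) of what 4:3 and 16:9 modes mean on a 480x272 screen:

```python
# Illustrative only: how a 4:3 or 16:9 viewport fits the Evercade's
# native 480x272 (16:9) LCD. Not taken from Blaze's firmware.

PANEL_W, PANEL_H = 480, 272  # native resolution, per the spec above

def viewport(aspect_w: int, aspect_h: int) -> tuple[int, int]:
    """Largest aspect_w:aspect_h viewport that fits the panel height."""
    width = round(PANEL_H * aspect_w / aspect_h)
    return (min(width, PANEL_W), PANEL_H)

print(viewport(4, 3))    # (363, 272): 4:3 games run pillarboxed with side bars
print(viewport(16, 9))   # (480, 272): wider Lynx-style content fills the panel
```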
Blaze Entertainment developed 20 versions of the Evercade D-pad before settling on a final version. The design is based on the D-pads featured on the Sega Genesis/Mega Drive and Sega Saturn controllers. Aside from the D-pad, the console includes four action buttons on the front and two trigger buttons on top. It also has \"menu\", \"select\" and \"start\" buttons. The layout of the four action buttons was determined after Blaze conducted an online poll, which found that 68 percent of respondents wanted a layout like those used on modern game controllers. However, this created confusion, as in-game prompts do not always match the buttons (a player may need to press \"B\" when prompted to press \"A\"). At launch, Blaze released a firmware update addressing the layout issue, which required the user to connect the console via USB to the Evercade website.\n\nTwo-player games converted for the Evercade retain their multiplayer function, with the intention that future hardware will allow two players. The addition of Bluetooth had been considered as a way to add multiplayer, but the development team scrapped the idea because of cost and complexity, which did not fit the console's focus. At the end of 2019, before the Evercade's release, Blaze was already working on a second version with multiplayer capability and a possibly easier way of connecting the console to a television.\n\nEvercade games are distributed on multi-game ROM cartridges, each usually containing between 5 and 20 games, although 2 of the cartridges contain only 2 or 3 games. Evercade cartridges support saving game progress, a modern feature not usually present in older games. The Evercade's use of game cartridges was considered unique, as most retro handheld consoles used built-in or downloaded game ROMs. Unlike other retro consoles, the goal for the Evercade was to give retrogamers a chance to build a collection of physical games. Cartridges, clamshell packaging, and paper instruction manuals were part of the effort to appeal to retrogamers, as digital game downloads had become common in recent years. Cartridges and their packaging are numbered to encourage collecting. Evercade cartridges are white in color, and are similar in size to Game Boy and Game Gear cartridges.", "doc_id": "8e77f00c-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Westside_(New_Zealand_TV_series)", "document": "Westside is a New Zealand comedy drama television series created by Rachel Lang and James Griffin[2] for South Pacific Pictures. It is a prequel to Outrageous Fortune, and chronicles the lives of Ted and Rita West. The show aired from 31 May 2015 to 16 November 2020 on Three.\n\nSeries 4 premiered on 9 July 2018. On 21 July 2018, NZ on Air announced funding for a fifth series consisting of 10 episodes. On 19 July 2019, NZ on Air announced funding for a sixth and final series of Westside.\n\nThe first series, set in the 1970s, features a Westie couple and stars Antonia Prebble and David de Lautour as Rita and Ted West. The first episode, set in 1974, features John Walker beating Rod Dixon in the 1500 metres at the 1974 Commonwealth Games. Each episode covers one year, from 1974 to 1979, with events like the Muldoon election, dawn raids on overstayers, carless days, and the birth of the punk rock scene in Auckland.\n\nThe second series is set in 1981, and follows the Springbok Tour. The series starts with Rita returning home from prison to find the West household in disrepair. Throughout the course of the series, Ted's gang plots to steal from the South Africans visiting New Zealand, preventing them from buying land, while Rita plans a job against developer Evan Lace.\n\nThe third series is set in 1982 and deals with the fallout from the Evan Lace job and Wolf's first meeting with his future wife, Cheryl.\n\nIn July 2014, NZ on Air approved funding of NZ$4.8 million for the miniseries. On 28 July 2015, NZ on Air approved funding of NZ$7.6 million for a second series; on 2 August 2016, NZ$6.6 million was approved for a third series; and on 24 July 2017, NZ$6.5 million was approved for a fourth series. In September 2017, an additional NZ$1.2 million was approved for series four.\n\nFilming for series one commenced on 12 October 2014 and concluded on 17 December 2014. Filming for series two commenced on 27 September 2015 and concluded on 19 January 2016. Filming for series three commenced on 30 October 2016 and concluded on 3 February 2017.
Filming for series four commenced on 19 November 2017 and concluded on 18 March 2018.", "doc_id": "8e77f08e-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Classification_of_ethnicity_in_the_United_Kingdom", "document": "A number of different systems of classification of ethnicity in the United Kingdom exist. These schemata have been the subject of debate, including about the nature of ethnicity, how or whether it can be categorised, and the relationship between ethnicity, race, and nationality.\n\nThe 1991 UK census was the first to include a question on ethnicity. Field trials had started in 1975 to establish whether a question could be devised that was acceptable to the public and would provide information on race or ethnicity that would be more reliable than questions about an individual's parents' birthplaces. A number of different questions and answer classifications were suggested and tested, culminating in the April 1989 census test. The question used in the later 1991 census was similar to that tested in 1989, and took the same format on the census forms in England, Wales and Scotland. However, the question was not asked in Northern Ireland. The tick-boxes used in 1991 were \"White\", \"Black-Caribbean\", \"Black-African\", \"Black-Other (please describe)\", \"Indian\", \"Pakistani\", \"Bangladeshi\", \"Chinese\" and \"Any other ethnic group (please describe)\".\n\nSociologist Peter J. Aspinall has categorised what he regards as a number of \"persistent problems with salient collective terminology\". These problems are ambiguity in respect of the populations that are described by different labels, the invisibility of white minority groups in official classifications, the acceptability of the terms used to those that they describe, and whether the collectivities have any substantive meaning.\n\nThe police services of the UK began to classify arrests by racial group in 1975, but later replaced the race code with an Identity Code (IC) system.\n\nOne of the recommendations of the Stephen Lawrence Inquiry was that people stopped and searched by the police should have their self-defined ethnic identity recorded. In March 2002, the Association of Chief Police Officers proposed a new system for self-definition, based on the 2001 census. From 1 April 2003, police forces were required to use this new system. Police forces and civil and emergency services, the NHS and local authorities in England and Wales may refer to this as the \"16+1\" system, named after its 16 classifications of ethnicity plus one category for \"not stated\".\n\nThe IC classification is still used for descriptions of suspects by police officers amongst themselves, but it risks describing a victim, a witness or a suspect differently from that person's own description of their ethnicity. When a person is stopped by a police officer exercising statutory powers and asked to provide information under the Police and Criminal Evidence Act, they are asked to select one of the five main categories representing broad ethnic groups and then a more specific cultural background from within this group. Officers must record the respondent's answer, not their own opinion. The \"6+1\" IC code system remains widely used when the police are unable to stop a suspect and ask them to give their self-defined ethnicity.\n\nThe Department for Education's annual school census collects data on pupils in nurseries, primary, middle, secondary and special schools.
This includes ethnicity data for pupils who are aged 5 or over at the beginning of the school year in August. The guidance notes on data collection note that ethnicity is a personal, subjective awareness, and that pupils and their parents can refuse to answer the ethnicity question. The codes used are based on the categories used in the 2001 UK census, with added \"Travellers of Irish heritage\", \"Gypsy/Roma heritage\" and \"Sri Lankan Other\" categories. If these codes are judged not to meet local needs, local authorities may use a Department for Education-approved list of extended categories. The National Pupil Database (NPD) attempts to match pupils' educational attainment to their characteristics gathered in the school census, including ethnicity. However, according to HM Inspectorate for Education and Training in Wales, the database contains data inaccuracies. A few local authorities and schools had never accessed the repository, and some of these institutions were unaware of its existence. The NPD was also the least used of the methods of educational data analysis surveyed among local authorities and schools, with 65 percent deeming it to be of limited use, about 23 percent considering it fairly useful, and only around 11 percent regarding it as very useful. Most schools and local authorities instead used the Welsh Assembly Government's national free school meal (FSM) benchmark data, which ranks a school's performance relative to other groups of schools with comparable free school meal levels. Around 55 percent of schools and local authorities deemed the benchmark data very useful, 35 percent considered it fairly useful, and only about 10 percent regarded it as being of limited use. Additionally, researchers conducting analysis for the London Borough of Lambeth have argued that broad ethnic groupings such as \"black African\" or \"white other\" can hide significant variation in educational performance, so they instead recommend the use of language categories.\n\nThe ethnic group categories used in the National Health Service in England are based on the 2001 census. It has been argued that this causes problems, as other agencies, such as social services, use the newer 2011 census categories. In Scotland, the 2011 Scottish census categories are now used. In 2011, Scotland started to record ethnicity on death certificates, becoming the first country in the world to do so. Ethnicity data is not routinely recorded on birth certificates in any part of the UK.\n\nWhether the official UK ethnic group classifications are useful for research on health is the subject of debate. Peter Aspinall argues that the 2001 census categories fail to adequately break down the \"white\" group and ignore ethno-religious differences between South Asian groups, amongst other issues. Writing in the Journal of Epidemiology and Community Health, Charles Agyemang, Raj Bhopal and Marc Bruijnzeels argue that: \"The current groupings of African descent populations in the USA and the UK such as Black, Black African, and African American hide the huge heterogeneity within these groups, which weakens the value of ethnic categorisation as a means of providing culturally appropriate health care, and in understanding the causes of ethnic differences in disease.
Such broad terms may not fit with self-definition of ethnicity.\"", "doc_id": "8e77f1c4-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Academic_journal_publishing_reform", "document": "Academic journal publishing reform is the advocacy for changes in the way academic journals are created and distributed in the age of the Internet and the advent of electronic publishing. Since the rise of the Internet, people have organized campaigns to change the relationships among and between academic authors, their traditional distributors and their readership. Most of the discussion has centered on taking advantage of the benefits offered by the Internet's capacity for widespread distribution of reading material.\n\nAlthough it has some historical precedent, demand for open access grew in response to the advent of electronic publishing as part of a broader desire for academic journal publishing reform. Electronic publishing created new benefits compared with paper publishing, but it also created new problems for traditional publishing models.\n\nThe premises behind open access are that there are viable funding models to maintain traditional academic publishing standards of quality while also making the following changes to the field:\n\n1. Rather than making journals available through a subscription business model, all academic publications should be free to read and published with some other funding model. Publications should be gratis, or \"free to read\".\n2. Rather than applying traditional notions of copyright to academic publications, readers should be free to build upon the research of others. Publications should be libre, or \"free to build upon\".\n3. Everyone should have greater awareness of the serious social problems caused by restricting access to academic research.\n4. Everyone should recognize that there are serious economic challenges for the future of academic publishing. Even though open access models are problematic, traditional publishing models definitely are not sustainable and something radical needs to change immediately.\n\nOpen access also has ambitions beyond merely granting access to academic publications, as access to research is only a tool for helping people achieve other goals. Open access advances scholarly pursuits in the fields of open data, open government, open educational resources, free and open-source software, and open science, among others.\n\nThe motivations for academic journal publishing reform include the ability of computers to store large amounts of information, the advantages of giving more researchers access to preprints, and the potential for interactivity between researchers.\n\nVarious studies showed that demand for open access research was such that freely available articles consistently had higher citation impact than articles published under restricted access.\n\nSome universities reported that modern \"package deal\" subscriptions were too costly for them to maintain, and that they would prefer to subscribe to journals individually to save money.\n\nPublishers state that if profit were not a consideration in the pricing of journals, the cost of accessing those journals would not substantially change.
Publishers also state that they add value to publications in many ways, and that without academic publishing as an institution, the readership would miss these services and fewer people would have access to articles.\n\nCritics of open access have suggested that, by itself, it is not a solution to scientific publishing's most serious problem \u2013 it simply changes the paths through which ever-increasing sums of money flow. Evidence for this exists; for example, Yale University ended its financial support of BioMed Central's Open Access Membership program effective July 27, 2007. In their announcement, the libraries stated: \"The libraries\u2019 BioMedCentral membership represented an opportunity to test the technical feasibility and the business model of this open access publisher. While the technology proved acceptable, the business model failed to provide a viable long-term revenue base built upon logical and scalable options. Instead, BioMedCentral has asked libraries for larger and larger contributions to subsidize their activities. Starting with 2005, BioMed Central article charges cost the libraries $4,658, comparable to a single biomedicine journal subscription. The cost of article charges for 2006 then jumped to $31,625. The article charges have continued to soar in 2007 with the libraries charged $29,635 through June 2007, with $34,965 in potential additional article charges in submission.\"\n\nOpponents of the open access model see publishers as a part of the scholarly information chain and view a pay-for-access model as necessary for ensuring that publishers are adequately compensated for their work. \"In fact, most STM [Scientific, Technical and Medical] publishers are not profit-seeking corporations from outside the scholarly community, but rather learned societies and other non-profit entities, many of which rely on income from journal subscriptions to support their conferences, member services, and scholarly endeavors\". Scholarly journal publishers that support pay-for-access claim that the \"gatekeeper\" role they play, maintaining a scholarly reputation, arranging for peer review, and editing and indexing articles, requires economic resources that are not supplied under an open access model. Conventional journal publishers may also lose customers to open access publishers who compete with them. The Partnership for Research Integrity in Science and Medicine (PRISM), a lobbying organization formed by the Association of American Publishers (AAP), is opposed to the open access movement. PRISM and AAP have lobbied against the increasing trend amongst funding organizations to require open publication, describing it as \"government interference\" and a threat to peer review.", "doc_id": "8e77f304-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Champaign\u2013Urbana_metropolitan_area", "document": "The Champaign\u2013Urbana metropolitan area, also known as Champaign\u2013Urbana and Urbana\u2013Champaign as well as Chambana (colloquially), is a metropolitan area in east-central Illinois. As defined by the Office of Management and Budget (OMB), the metropolitan area has a population of 222,538 as of the 2020 U.S. Census, which ranks it as the 207th largest metropolitan statistical area in the U.S.
The area is anchored by the principal cities of Champaign and Urbana, and is home to the University of Illinois Urbana-Champaign, the flagship campus of the University of Illinois system.\n\nAs of March 2020, the OMB defines the metropolitan area (officially designated the Champaign\u2013Urbana, IL MSA) to consist of Champaign County and Piatt County. Until 2018, Ford County was considered a part of the metropolitan area.\n\nJournalists frequently treat the metropolitan area as just one city. For example, in 1998, Newsweek included the Champaign-Urbana Metropolitan Area in its list of the top ten tech cities outside of Silicon Valley. Champaign-Urbana also ranked tenth among the top twenty-five green cities in the United States in a 2007 survey by Country Home magazine.\n\nA number of major developments have significantly changed downtown Champaign since the beginning of the 21st century. Beginning in the 1990s, city government began to aggressively court development, including by investing millions of dollars of public funds in downtown improvements and by offering developers incentives, such as liquor licenses, to pursue projects in the area. The 9-story M2 on Neil project is one example. The project began in 2007 with the dismantling of the facade of the deteriorated Trevett-Mattis Banking Co. building, which previously occupied the site; the facade was retained on the M2 building. Residents first began to lease space in the M2 in the winter of 2009. The M2 includes not just condos for residential occupation but also retail and office space on its lower floors, a common trend in new developments in the urban core. Across the street, a 9-story Hyatt Place boutique hotel opened in the summer of 2014. In the Campustown area adjoining the University of Illinois, the new 24-story highrise apartment building 309 Green was nominally completed in the fall of 2007 but had only partial occupancy at least through the fall of 2008. It is 256 feet (78 m) tall, a full 3 stories higher than the older 21-story Tower at Third, the first contribution to the Urbana\u2013Champaign skyline. The 18-story Burnham 310 Project, which is also taller in overall height, was finished in the fall of 2008 and includes student luxury apartments and a County Market grocery store. Burnham 310 connects downtown Champaign to Campustown. In 2013-14, four other mixed-use buildings (apartments above commercial) were built in Campustown, with heights of 26, 13, 8, and 5 stories. On the University of Illinois campus, Memorial Stadium has undergone major renovation, with construction of new stands, clubs, and luxury suites. Across Kirby Avenue, the Assembly Hall, first built in 1963 and renamed the State Farm Center as part of a major renovation begun in 2014, continues to be the home of Illinois basketball and has resumed hosting concerts and other performing arts since the renovation was completed in late 2016. In the late 2000s, the restoration of the Champaign County Courthouse bell tower capped the expansion and renovation of Courthouse facilities and provided a striking focal point in downtown Urbana. These, among other developments, have given the Twin Cities a more urban feel.\n\nThe Champaign-Urbana Metro area has two hospitals located less than a mile apart near University Avenue in Urbana: the Carle Foundation Hospital and OSF Heart of Mary Medical Center, with a combined total of over 550 physicians.
Both hospitals provide various specialized services, and Carle Hospital currently has a Level III Neonatal Intensive Care Unit, a Level I Trauma Center, and a medical helicopter service. Both hospitals have struggled to maintain their tax-exempt status with the State of Illinois.\n\nCarle Clinic Association was purchased by the Carle Foundation in 2010. It was renamed Carle Foundation Physician Services, and it maintains several locations next to the hospital, as well as other locations within Champaign-Urbana and other East Central Illinois cities. Christie Clinic, another smaller multi-specialty group practice, is headquartered in downtown Champaign. It is largely affiliated with OSF, though not as closely linked as its Carle counterpart is with Carle.\n\nBoth hospitals and clinics are affiliated with the University of Illinois College of Medicine at Urbana, part of the larger University of Illinois College of Medicine, which has campuses in Chicago, Peoria, Rockford, and Urbana. The College has a teaching presence at both hospitals, although the facilities are somewhat more extensive at Carle Foundation Hospital.\n\nPiatt County, which is included in the Champaign-Urbana Metro Area, also has a hospital. Kirby Medical Center is a general medical and surgical facility located in Monticello. Both Carle Clinic and Christie Clinic have satellite facilities located at Kirby.\n\nThe Champaign-Urbana Metropolitan Area is home to many theatres. The University is home to three theatre venues: Foellinger Auditorium, Assembly Hall and the Krannert Center for the Performing Arts. While the Assembly Hall is primarily a campus basketball and concert arena, the Krannert Center for the Performing Arts is considered to be one of the nation's top venues for performance and hosts over 400 performances annually. Built in 1969, the Krannert Center covers over four acres (16,000 sq m) of land and features four theatres and an amphitheatre.\n\nThe Historic Virginia Theatre in downtown Champaign is a public venue owned by the city of Champaign and administered by the Champaign Park District. It is best known for hosting Roger Ebert's Film Festival, which occurs annually during the last week of April. The Virginia also features a variety of performances, from community theatre with the Champaign Urbana Theatre Company to post-box-office showings of popular films, current artistic films, live musical performances (both orchestral and popular), and other types of shows. First commissioned in 1921, it originally served as a venue for both film and live performances, but became primarily a movie house in the 1950s. Occasional live events were held during the 1970s and 1980s, including a live production of \"Oh, Calcutta\" and performances by George Benson, Stevie Ray Vaughan, Missing Persons, and the Indigo Girls. GKC Corporation closed the Virginia as a movie house on February 13, 1992, with the final regular film being Steve Martin's \"Father of the Bride\". The theatre once again began holding regular live performances when it was leased to local gospel singer David Wyper in 1992. The Champaign-Urbana Theatre Company was formed to perform major musicals and opened its first season with \"The Music Man\" that June. Control passed to the Virginia Theatre group in 1996 and the theatre became a non-profit public venue. The Champaign Park District assumed control of the facilities in 2000.
Its original Wurlitzer theatre pipe organ has been maintained by Warren York since 1988 and is still played regularly.\n\nThe Art Theater in downtown Champaign began in 1913 as the Park, Champaign's first theatre devoted to movies, and was a small venue showing films not normally playing at the box office. It was the only single-screen movie theatre in daily operation as a movie theatre in Champaign-Urbana. The theater ceased operations on October 31, 2019. The Virginia, which hosts Roger Ebert's annual Overlooked Film Festival, is also single-screen, but only opens for special showings and events. Rapp and Rapp's 1914 Orpheum Theatre closed in the mid-1980s and now houses a children's science museum. Parkland College in Champaign features a small theatre called the Parkland College Theatre and a planetarium called the William M. Staerkel Planetarium.", "doc_id": "8e77f476-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Self-blame_(psychology)", "document": "Self-blame is a cognitive process in which an individual attributes the occurrence of a stressful event to themselves. The direction of blame often has implications for individuals\u2019 emotions and behaviors during and following stressful situations. Self-blame is a common reaction to stressful events and affects how individuals adapt. Types of self-blame are hypothesized to contribute to depression, and self-blame is a component of self-directed emotions like guilt and self-disgust. Because self-blame is a common response to stress and plays a role in emotion, it should be examined using psychology's perspectives on stress and coping. This article gives an overview of contemporary research on self-blame in psychology.\n\nWhile conceptualizations of stress have differed, the most dominant accounts in current psychology are appraisal-based models of stress. These models define stress as a reaction to a certain type of subjective appraisal, made by an individual, of the circumstances he or she is in. Specifically, stress occurs when an individual decides that a factor in the environment puts demands on the individual beyond his or her current ability to deal with it. The process of rating situations as demanding or nondemanding is called appraisal, and this process can occur quickly and without conscious awareness. Appraisal models of stress are sometimes called \u201cinteractional\u201d because the occurrence of stress depends on an interaction between characteristics of the person, especially goals, and the environmental situation. Only if the individual perceives a situation to threaten his or her goals does stress occur. This structure explains the fact that individuals often differ in their emotional and stress responses when they are presented with similar situations. Stress does not come from events themselves, but from the conflict of the event with an individual's goals. While researchers disagree about the time-course of appraisals, how appraisals are made, and the degree to which individuals differ in their appraisals, appraisal models of stress are dominant in psychology.
Stress itself is a systemic psychological state that includes a subjective \u201cfeel\u201d and a motivational component (the individual desires to reduce stress); some researchers consider stress to be a subset of, or a system closely related to, emotions, which likewise depend on appraisal and motivate behavior.\n\nOnce this appraisal has occurred, actions taken to reduce stress constitute coping processes. Coping can involve changes to the situation-environment relationship (changing the situation or the goals that led to the stress appraisal), reducing the emotional consequences of a stress appraisal, or avoiding thinking about the stressful situation. Categorizations of types of coping vary between researchers. Coping strategies differ in their effects on subjective well-being; for example, positive reappraisal is consistently found to be a correlate of higher subjective well-being, while distraction from stressors is typically a negative correlate of well-being. Coping behaviors constitute the moderating factor between events and circumstances on one hand and psychological outcomes, like well-being or mental disorders, on the other. Causal attribution for an event is one way to deal with the stress of the event, and so self-blame is a type of coping. During and after traumatic events, individuals\u2019 appraisals affect how stressful the event is, their beliefs about what caused the event, the meanings they may derive from the event, and the changes they make in their future behavior.\n\nA classification of self-blame into characterological and behavioral types has been proposed to distinguish whether individuals are putting blame on changeable or unchangeable causes. This division, first proposed by Janoff-Bulman, defines behavioral self-blame (BSB) as causal attribution of an event's occurrence to specific, controllable actions that the individual took. Characterological self-blame (CSB), on the other hand, is attribution of blame to factors of the self that are uncontrollable and stable over time (e.g. \u201cI am the type of person that gets taken advantage of\u201d). CSB attributions are harder to change than behavioral attributions of blame. The development of these categories comes from observation of depressed individuals; sufferers often display feelings of helplessness and lack of control while simultaneously blaming their choices for negative occurrences, resulting in the so-called \u201cparadox of depression\u201d. From an outside perspective, it would seem that blaming one's actions implies that the individual can choose better in the future. However, if blame is directed at uncontrollable characteristics (CSB) rather than choosable actions (BSB), the factors resulting in a negative outcome were uncontrollable. BSB and CSB are thus proposed to be activities that, while related, are distinct and differ in their effects when used as coping processes.\n\nEmpirical findings support the existence of the behavioral/characterological distinction in self-blame. For one, BSB is much more common than CSB (Tilghman-Osbourne, 2008). A factor analysis of individuals\u2019 attributions of blame and their ability to predict psychological symptoms identified two clusters of self-blame: a factor of blame for the type of victim one is, correlated with self-contempt and self-disgust; and a factor of blame towards the poor judgment or choices of the victim, correlated with guilt.
These factors closely correspond to the CSB and BSB definitions, and so the study provides some theoretical support that individuals assign self-blame differently to unchoosable characteristics and to choices they have made. Research has also compared CSB and BSB to moral emotions that individuals feel, such as guilt and shame. CSB and shame had convergent validity in predicting depressive symptoms in adolescents. On the other hand, guilt and BSB did not show convergent validity, and some evidence suggests further subtypes of guilt and BSB. Factor analysis of adolescents' self-blame from bullying showed differences between attributions of CSB and BSB.\n\nHowever, though distinct types of self-blame have been identified, evidence distinguishing their effectiveness as coping has been mixed. Both CSB and BSB predicted depressive symptoms in rape victims, though CSB also had a stronger relationship with future fear, and both types correlated positively with symptoms of psychological disorder in domestic abuse victims. CSB mediated the relationship between bullying victimization and anxiety, loneliness, and low self-worth in middle-school students, while BSB had no positive or negative effect on well-being. Other studies did not find significant effects of self-blame on psychological outcomes. One study found that BSB and CSB had a concurrent relationship with depressive symptoms but no power to predict depressive symptoms in the future, while another found that only CSB concurrently correlated with depressive symptoms. A study by Ullman and colleagues found no effect of CSB in predicting PTSD or depressive symptoms from sexual abuse. Parents of children who died of sudden infant death syndrome showed no predictive relationship between BSB or CSB and future distress.\n\nMany studies, including recent ones, continue to treat self-blame as a unified factor. Studies that conflate the two senses of self-blame tend to find negative psychological impacts; the notable exception is the seminal Bulman & Wortman study of accident paralysis victims, which noted the adaptive effect of self-blame in improving victims\u2019 recovery.", "doc_id": "8e77f5de-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Live_After_Death", "document": "Live After Death is a live album by English heavy metal band Iron Maiden, originally released in October 1985 on EMI in Europe and its sister label Capitol Records in the US (it was re-released by Sanctuary/Columbia Records in the US in 2002 on CD and by Universal Music Group/Sony BMG Music Entertainment on DVD). It was recorded at Long Beach Arena, California and Hammersmith Odeon, London during the band's World Slavery Tour.\n\nThe video version of the concert only contains footage from the Long Beach shows. It was initially released through Sony as a \"Video LP\" on VHS hi-fi stereo and Beta hi-fi stereo with 14 songs and no special features, and was reissued on DVD on 4 February 2008, coinciding with the start of the band's Somewhere Back in Time World Tour. In addition to the complete concert, the DVD features Part 2 of The History of Iron Maiden documentary series, which began with 2004's The Early Days and continued with 2013's Maiden England '88, documenting the recording of the Powerslave album and the following World Slavery Tour.\n\nIron Maiden's World Slavery Tour began in Warsaw, Poland on 9 August 1984 and lasted 331 days, during which 187 concerts were performed.
To tie in with their 1984 album, Powerslave, the tour's stage show adhered to an ancient Egyptian theme: the stage was decorated with sarcophagi, Egyptian hieroglyphs, and mummified representations of the band's mascot, Eddie, in addition to numerous pyrotechnic effects. The theatricality of the stage show meant that it would become one of the band's most acclaimed tours, making it the perfect backdrop for their first live double album and concert video.\n\nThe double LP was also recorded at Long Beach, although side four contains tracks recorded at Hammersmith Odeon, London on 8, 9, 10 and 12 October 1984. Bassist Steve Harris has stated that, even if they had had the time, they would not have added any studio overdubbing to the soundtrack: \"We were really anti all that, anyway. We were very much, like, 'This has got to be totally live,' you know?\"\n\nThe album has received consistent critical praise, with reviewers hailing it as one of the genre's best live albums. For the band, the release was advantageous as it meant they could delay the recording of their next studio album, 1986's Somewhere in Time. Time off was beneficial for the band, who desperately needed to recuperate following the World Slavery Tour's heavy schedule.\n\nLive After Death has been highly rated by critics since its release; Kerrang! and Sputnikmusic both agree that it is \"possibly the greatest live album of all time\", while AllMusic describes it as \"easily one of heavy metal's best live albums\".\n\nSputnikmusic argues that it is the band's best live album, concluding that \"Iron Maiden's 1985 release has everything you could ask for. With exciting renditions of classic songs and brilliant performances, Live After Death is quite a fun listen.\" PopMatters describes it as \"a searing, 102-minute collection of Maiden at [their] peak ... an absolute treasure for fans [which] went on to be universally regarded as an instant classic in the genre\".\n\nThe album's video counterpart received similar critical acclaim, with AllMusic stating that \"Live After Death is a visual pleasure as much as a sonic one. The elaborate staging and lighting effects are excellent. The editing is superb as well with very few rapid-fire, seizure-inducing camera cuts\". The bonus features included in the 2008 DVD reissue were also praised by PopMatters, Kerrang! and About.com.\n\nThe album has also been described by Classic Rock as \"the last great live album of the vinyl era.\"", "doc_id": "8e77f6b0-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/The_Humans_(video_game)", "document": "The Humans is a puzzle-platform video game developed by Imagitec Design in Dewsbury, England and originally published by Mirage Technologies for the Amiga in May 1992. It was later ported to other home computers and consoles. The goal of the game varies per level but usually revolves around bringing at least one of the player-controlled humans to the designated end area marked by a colored tile.
Doing this requires players to take advantage of the tribe's ability to build a human ladder, and to use tools such as spears, torches, wheels, ropes and, in later levels, a witch doctor.\n\nThe Humans was conceived by Rodney Humble during his time at Imagitec Design as a project for the Atari Lynx, spawning a trilogy based upon human evolution and inspired by Psygnosis' Lemmings. Humble created and drew his ideas before transferring the design work to Imagitec programmers, who developed them further. It was the first game to be published by MicroProse offshoot Mirage, and Atari Corporation liked the title and commissioned two additional conversions for its platforms.\n\nThe Humans was very well received by video game magazines and garnered praise for its originality, presentation and audio upon its initial Amiga launch. Other versions of the game have been met with a more mixed reception from critics and reviewers alike. It was followed by three sequels: The Humans: Insult to Injury in 1992, Humans 3: Evolution - Lost in Time in 1995, and The Humans: Meet the Ancestors! in 2009.\n\nThe Humans is a puzzle game similar to Lemmings whose objective is to manipulate the given number of humans, taking advantage of abilities and tools to achieve the level's goal, usually consisting of finding a certain tool, killing a certain number of dinosaurs or bringing at least one human to the end point, marked by a conspicuous colored tile. Each level is independent of the next, each with its own tools, goal, and set number of humans allowed per level. The only things that carry over from level to level are the total number of humans in the player's tribe and the player's total score.\n\nThe player controls one human at a time, and may switch between any of the humans at any time. In order to complete a level, it is often necessary to use certain tools or abilities, such as stacking humans to reach a high ledge. For instance, the spear, a tool obtained in the first level of the game, may be thrown across gaps to other humans, used to jump chasms, thrown to kill dinosaurs or other enemies, or brandished to hold off dinosaurs temporarily. Certain levels also feature NPCs, such as a pterodactyl that can be ridden in order to reach otherwise unreachable platforms; these cannot be controlled directly, but can be used to the player's advantage. Several forms of enemy appear, ranging from dinosaurs that eat a human who is unarmed and within their walking range to spear-wielding members of enemy tribes.\n\nThere can be up to eight controllable humans in a level, though some levels allow as few as three. Though there is a preset number of humans allowed per level, there is no limit to how many humans are in the player's tribe. If a human dies, he is replaced by one from the tribe as long as there are humans left to replace him. During the course of the game, the player is given chances to rescue other humans and add them to the tribe. If there are fewer humans in the player's tribe than the minimum required number for any given level, the game is over. Each level, however, has a password that can be used to jump to that particular level from the beginning of the game.
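The tribe bookkeeping described above amounts to a few simple rules. A minimal sketch of those rules (the class and method names are hypothetical, not from The Humans' actual source code):

```python
# Illustrative model of the tribe rules described above: lost humans are
# replaced from the tribe, rescues grow it, and a level cannot be
# attempted if the tribe falls below that level's minimum.

class Tribe:
    def __init__(self, size: int):
        self.size = size  # total humans in the tribe, carried between levels

    def lose_human(self) -> None:
        # A dead human is replaced from the tribe, so the tribe shrinks.
        self.size -= 1

    def rescue_human(self) -> None:
        # Rescued humans join the tribe.
        self.size += 1

    def can_attempt(self, level_minimum: int) -> bool:
        # The game is over if the tribe is smaller than the level's minimum.
        return self.size >= level_minimum

tribe = Tribe(size=8)
tribe.lose_human()
print(tribe.can_attempt(level_minimum=3))  # True: 7 humans remain
```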
The Humans was the creation of former Imagitec Design designer Rodney Humble, who, inspired by Psygnosis' Lemmings and its puzzle elements, created and drew his ideas on storyboards before transferring his work to the Imagitec programmers, who developed them further into a trilogy based upon human evolution. Coding on the project started in December 1991, with Suspicious Cargo programmer David Lincoln responsible for the Amiga version, although design work originally started on the Atari Lynx under the working titles Dino Dudes and Dino World. Atari Corporation reportedly liked the game and commissioned Imagitec to produce two additional conversions, for its Atari Falcon and Atari Jaguar platforms respectively.\n\nThe Humans' creation process was overseen by co-producers Martin Hooley and Simon Golding, the latter of whom oversaw all versions of the game. Golding stated that the production was inspired by Lemmings rather than being \"a rip-off\", focusing instead on \"bigger graphics\", a cartoon-esque feel reminiscent of shorts like Tom and Jerry, and more varied levels, among other features. Lincoln employed Cross Products' SNASM programming tool to write the code in an editor on a PC before porting it to the Amiga for testing. Artists Andrew Gilmour and Michael Hanrahan drew the pixel art, while composers Barry Leitch and Ian Howe were responsible for the soundtrack. Other members at Imagitec were also involved in the title's production across every subsequent version released.", "doc_id": "8e77f78c-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Fasih_Bokhari", "document": "Admiral Fasih Bokhari (8 March 1942 \u2013 24 November 2020) was a Pakistani admiral who served as the Chief of Naval Staff from 1997 to 1999. He was a well-known pacifist and a prominent political figure, holding the post until his voluntary resignation in 1999, which stemmed from his staunch opposition to then-Chief of Army Staff General Pervez Musharraf's instigation of the Kargil War with India, a conflict that Bokhari reportedly saw as an act of inappropriate and uncoordinated aggression by Pakistan and one that subsequently led him into a bitter dispute with Musharraf. Bokhari also served as the chairman of the National Accountability Bureau, a Pakistani anti-corruption agency.\n\nIn 1999, Bokhari publicly disagreed with and revolted against the decision of then-Prime Minister Nawaz Sharif to extend Pervez Musharraf's tenure as the Chairman of the Joint Chiefs of Staff Committee, shortly before Sharif's attempted supersession of Musharraf as the Chief of Army Staff. He is notable for his opposition to war, having called in 2000 for public introspection about Musharraf's decisions related to the 1999 Kargil War.\n\nIn 2011, Bokhari was appointed as the chairman of the National Accountability Bureau by President Asif Ali Zardari. However, his appointment was mired in public controversies, leading to his eventual removal by the Supreme Court of Pakistan in 2013.\n\nIn 2007, Bokhari became the President of the Pakistan Ex-Servicemen Association, a post he held until 2010, when he became the Convenor of The Save Pakistan Coalition.\n\nOn 17 October 2011, Bokhari was appointed Chairman of the National Accountability Bureau by then-President Asif Ali Zardari, who confirmed the appointment. The appointment was met with controversy when then-Opposition leader Nisar Ali Khan raised objections to the nomination on technical grounds, but these were rejected by President Zardari.
In 2012, he vowed to eliminate corruption and maintained that the NAB should adapt in order to eliminate it from the country.\n\nFollowing his appointment, Admiral Bokhari's chairmanship was challenged by then-Opposition leader Ali Khan, who submitted a complaint to the Supreme Court of Pakistan on a technicality. In 2013, Senior Justice T.H. Jillani declared Bokhari's appointment \"null and void.\" On 28 May 2013, President Zardari approved the summary that officially terminated Fasih Bokhari's appointment as chairman of the NAB.\n\nAfter his famous revolt and resignation, Admiral Bokhari began political activism aimed towards peace between the two countries and opposition to war. In 2002, and again in 2011, Admiral Bokhari pressed for the constitution of a commission that would examine the events that led to the Kargil War, and showed his willingness to testify before an inquiry commission formed by the government of the day. His call for an inquiry commission was supported by then-air chief PQ Mehdi, Lieutenant-General Gulzar Kiyani (DGMI), Lieutenant-General Tauqeer Zia (DGMO), Lieutenant-General Shahid Aziz (DG ISI Analysis Wing), and Lieutenant-General Abdul Majeed Malik.\n\nAfter the Kargil War and the coup d'\u00e9tat in 1999, followed by the military standoff between the two nations, Admiral Bokhari became politically active in supporting peace and expressing opposition to war, pressing for the resolution of any possible sources of future conflict at sea.\n\nThe Indian Navy's former Chief of the Naval Staff, Admiral J.G. Nadkarni, opined that Pakistan had sensible mariners in decision-making positions who were keen to have agreements with the Indian Navy: \"Admiral Fasih Bokhari, Pakistan's naval chief from 1997 to 1999, was a great proponent of maritime co-operation with India and believed that it would benefit both countries.\"\n\nFrom 2010 to 2011, Admiral Bokhari wrote a column on defence and strategic affairs for the English-language newspaper the Express Tribune, in which he focused on peaceful coexistence with India and balanced relations with the United States and Afghanistan.\n\nIn 2002, Admiral Bokhari stated that he had known about General Musharraf\u2019s plans to topple Prime Minister Nawaz Sharif and did not want to be part of these \"Dirty Games\". Admiral Bokhari also noted that a power struggle between the elected Prime Minister and the appointed Chairman of the Joint Chiefs ensued, and that relations were severely damaged after the Kargil war.\n\nOf the period before martial law was imposed against the elected government in 1999, Admiral Bokhari noted: \"The two men could not work together, both were preparing to take active actions against each other. I could see that there were now two centers of power on a collision course\". At an informal meeting held at the Navy NHQ in September 1999, Chairman of the Joint Chiefs General Musharraf indicated his displeasure with Prime Minister Nawaz Sharif's handling of the country, describing Prime Minister Sharif as \"incompetent and incapable of running the country.\" Admiral Bokhari got the firm impression that General Musharraf was sounding him out about relying on the support of the Navy in the event of a coup, and Admiral Bokhari discouraged the Chairman of the Joint Chiefs from doing so.\n\nHe contended that the Lahore Declaration process was the best trajectory for Pakistan and should be continued through a political dialogue. He further added that any rupture in the dialogue process would set the country back.
Bokhari realised that this meeting had been held to secure his support against the elected government.

Source: https://en.wikipedia.org/wiki/Indirect_tax

An indirect tax (such as a sales tax, per unit tax, value-added tax (VAT), goods and services tax (GST), excise, consumption tax, or tariff) is a tax that is levied upon goods and services before they reach the customer, who ultimately pays the indirect tax as part of the market price of the good or service purchased. Alternatively, if the entity that pays taxes to the tax-collecting authority does not suffer a corresponding reduction in income, i.e., the impact and the tax incidence do not fall on the same entity, meaning that the tax can be shifted or passed on, then the tax is indirect.

An indirect tax is collected by an intermediary (such as a retail store) from the person (such as the consumer) who pays the tax included in the price of a purchased good. The intermediary later files a tax return and forwards the tax proceeds to the government with the return. In this sense, the term indirect tax is contrasted with a direct tax, which is collected directly by the government from the persons (legal or natural) on whom it is imposed. Some commentators have argued that "a direct tax is one that cannot be charged by the taxpayer to someone else, whereas an indirect tax can be."

Indirect taxes constitute a significant proportion of total tax revenue raised by governments. Data published by the OECD show that the average indirect tax share of total tax revenue for all member countries in 2018 was 32.7%, with a standard deviation of 7.9%. The member country with the highest share was Chile, at 53.2%, and at the other end was the United States, at 17.6%. The general trend in the direct-versus-indirect tax ratio in total tax revenue over the past decades in developed countries shows an increase in the direct tax share; although this trend is also observed in developing countries, it is less pronounced there.

The incidence of indirect taxes is not clear-cut; in fact, statutory (legal) incidence in most cases tells us nothing about economic (final) incidence. The incidence of an indirect tax imposed on a good or service depends on the price elasticity of demand (PED) and the price elasticity of supply (PES) of the good or service concerned. If the good has an elastic demand and an inelastic supply, the tax burden falls mainly on the producer of the good, whereas the burden for a good with an inelastic demand and an elastic supply falls mainly on consumers. The only case in which the burden of an indirect tax falls totally on consumers, i.e., in which statutory and economic incidence coincide, is when the supply of a good is perfectly elastic and its demand perfectly inelastic; this is, however, a very rare case. The shifting of the tax incidence may be both intentional and unintentional. In fact, economic agents may shift the tax burden to other agents by changing their market behavior. For example, a tax imposed on a firm's output may lead to higher consumer prices, reduced wages for the firm's employees, reduced returns to the firm's owners and shareholders, a reduced supply of the good on the market, or any combination of these consequences.

Indirect taxes have a substantially regressive impact on the distribution of income, since an indirect tax is usually imposed on goods and services irrespective of the consumer's income.
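The two quantitative claims above, that the burden splits according to relative elasticities and that a fixed levy weighs more heavily on lower incomes, can be made concrete with a short sketch. This is a minimal illustration with hypothetical numbers; the burden-share formula is the standard first-order partial-equilibrium approximation, not something stated in the text.

```python
# Minimal illustration (hypothetical numbers) of the two points above:
# (1) how a per-unit tax splits between consumers and producers, and
# (2) why a fixed levy is regressive with respect to income.

def consumer_share(pes: float, ped: float) -> float:
    """Fraction of a per-unit tax borne by consumers under the standard
    first-order approximation: PES / (PES + |PED|)."""
    return pes / (pes + abs(ped))

def effective_rate(tax_paid: float, income: float) -> float:
    """Tax paid as a fraction of income."""
    return tax_paid / income

# Inelastic demand (|PED| = 0.4) and elastic supply (PES = 1.6):
# consumers bear most of the burden, as the text describes.
print(f"consumer share: {consumer_share(pes=1.6, ped=-0.4):.0%}")  # 80%

# The same fixed $100 tax consumes a larger share of a smaller income
# (the worked example in the next paragraph).
for income in (10_000, 5_000):
    print(f"income ${income:,}: effective rate {effective_rate(100, income):.1%}")
```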
In practice, the effective indirect tax rate is higher for individuals with lower incomes: a person with a lower income spends a higher proportion of that income on a given good or service than a person with a higher income. For example, consider a good with a $100 sales tax imposed on it. An individual with an income of $10,000 pays 1% of their income in tax, while a poorer individual with an income of $5,000 pays 2%. Moreover, the regressivity of an indirect tax system affects the total progressivity of a country's tax system, given the importance of indirect tax revenues in the government budget and the degree of regressivity of the indirect tax system, both of which vary among countries. Furthermore, the extent of the regressive nature of an indirect tax depends on the type of indirect tax. Empirical evidence suggests that excise taxes are in general more regressive than VAT. This could be attributed to the fact that excise taxes are levied on goods such as alcohol and tobacco, which comprise a higher share of the budgets of poorer households, while at the same time poorer households are likely to consume goods with reduced VAT rates, given that in some countries there is VAT relief on necessities such as food and medicine. As a result of the regressive nature of indirect taxes and the fact that they tend to be unresponsive to economic conditions, they cannot act as automatic stabilizers within the economy, unlike some direct taxes.

Indirect taxes, specifically excise taxes, are attractive because they have a corrective nature. Such taxes raise revenue and at the same time correct a market failure by increasing the price of the good and thus decreasing its consumption. Therefore, less revenue needs to be raised through other taxes, which could be more distortionary, to cover the market failure. The economy benefits from the reduced extent of the negative externality and from lower reliance on other taxes that distort production. Apart from generating revenue and reducing the consumption of goods creating negative externalities, excise taxes can be tailored to impose the tax burden on those who cause the negative externality or those who benefit from government services. Examples are the petrol tax, argued to be a user fee for government-provided roads, and the tobacco tax, imposed on smokers who, through smoking, create a negative consumption externality. The design of such an excise tax determines its consequences. The two main types of excise tax are the specific tax (imposed as a fixed amount of money per unit) and the ad valorem tax (imposed as a percentage of the price of a good). Specific and ad valorem taxes have identical consequences in competitive markets, apart from differences in compliance and enforcement. In imperfectly competitive markets, such as the cigarette market, ad valorem taxes are arguably better, as they automatically produce higher per-unit taxes when firms reduce production to increase prices, whereas specific taxes need to be readjusted in that case, which is an administratively and legislatively difficult process.

Source: https://en.wikipedia.org/wiki/Closed_communion

Closed communion is the practice of restricting the serving of the elements of Holy Communion (also called the Eucharist or the Lord's Supper) to those who are members in good standing of a particular church, denomination, sect, or congregation.
Though the meaning of the term varies slightly in different Christian theological traditions, it generally means that a church or denomination limits participation in the Eucharist either to members of its own church, to members of its own denomination, or to members of some specific class (e.g., baptized members of evangelical churches). The restriction is based on various parameters, one of which is baptism.

A closed-communion church is one that excludes certain individuals, whom it specifically identifies, from receiving communion. The standard varies from church to church. This is the practice of most traditional churches that pre-date the Protestant Reformation; churches that emerged from the Protestant Reformation have their own rules of restriction. In today's churches of various denominations, across the spectrum, the rules for participating in the Eucharist vary.

Churches that practice open communion allow all Christians to partake of the Lord's Supper, with membership in a particular Christian community not required to receive the bread and wine; this stands in contrast to pre-Reformation churches, which hold that what is received in their celebrations ceases to be bread and wine.

The Roman Catholic Church practices closed communion. However, provided that "necessity requires it or true spiritual advantage suggests it" and that the danger of error or indifferentism is avoided, canon 844 of the 1983 Code of Canon Law of the Latin Church and the parallel canon 671 allow, in particular exceptional circumstances that are regulated by the diocesan bishop or conference of bishops, members who cannot approach a Catholic minister to receive the Eucharist from ministers of churches that have a valid Eucharist. They also permit properly disposed members of the Eastern churches not in full communion with the Roman Church (the Eastern Orthodox Church, Oriental Orthodoxy, and the Assyrian Church of the East), and of churches judged to be in the same situation with regard to the sacraments, to receive the Eucharist from Catholic ministers if they seek it of their own accord. The Catholic Church distinguishes between churches whose celebration of the Eucharist, as well as holy orders, it recognizes as valid and those of other Christian communities. Where it is impossible to approach a Catholic minister, where there is real need or spiritual benefit, and where the danger of error or indifferentism is avoided, the Catholic Church permits its faithful to receive Communion in Orthodox churches, although Orthodox churches do not honour this and permit only Orthodox Christians to receive Communion in Orthodox churches. The Catholic Church does not ordinarily allow a Catholic to receive communion in a Protestant church, since it does not consider Protestant ministers to be priests ordained by bishops in a line of valid succession from the apostles, although Moravians, Anglicans, and some Lutherans teach that they ordain their clergy in lines of apostolic succession. It applies this rule also to the Anglican Communion, pursuant to Apostolicae curae, a position that the Church of England disputed in Saepius officio.

The Eastern Orthodox Church, comprising 14 to 17 autocephalous Orthodox hierarchical churches, is even more strictly a closed-communion Church.
Thus, a member of the Russian Orthodox Church attending the Divine Liturgy in a Greek Orthodox Church will be allowed to receive communion, and vice versa; but although Protestants, non-Trinitarian Christians, and Catholics may otherwise fully participate in an Orthodox Divine Liturgy, they will be excluded from communion. In the strictest sense, non-Orthodox may be present at the Divine Liturgy only up to the exclamation "The doors! The doors!" and ought to leave the church after that. However, this attitude has been relaxed in most Orthodox churches; a non-communicant may stay and participate in the Divine Liturgy but may not partake of the Eucharist. Thus, while in certain circumstances the Catholic Church allows its faithful who cannot approach a Catholic minister to receive the Eucharist from an Eastern Orthodox priest, the Eastern Orthodox Church does not admit them to receive the Eucharist from its ministers. At the very end of the Divine Liturgy, all people are invited to come up to receive a little piece of bread, called antidoron, which is blessed but not consecrated, being taken from the same loaf as the bread used in the consecration. Non-Orthodox present at the Liturgy are not only permitted but even encouraged to receive the blessed bread as an expression of Christian fellowship and love.

Confessional Lutheran churches, including the Lutheran Church–Missouri Synod and the Wisconsin Evangelical Lutheran Synod, practice closed communion and require catechetical instruction for all people before they receive the Eucharist; failing to do so is condemned by these Lutherans as the sin of unionism. This teaching comes from 1 Corinthians 10:16-17, which says, "Is not the cup of thanksgiving for which we give thanks a participation in the blood of Christ? And is not the bread that we break a participation in the body of Christ? Because there is one loaf, we, who are many, are one body, for we all share the one loaf", and from Paul's teaching on fellowship in 1 Corinthians 1:10, "I appeal to you, brothers and sisters, in the name of our Lord Jesus Christ, that all of you agree with one another in what you say and that there be no divisions among you, but that you be perfectly united in mind and thought." These Lutherans also take seriously God's threat in 1 Corinthians 11:27,29 that "Whoever eats the bread or drinks the cup of the Lord in an unworthy manner will be guilty of sinning against the body and blood of the Lord. A man ought to examine himself before he eats of the bread and drinks of this cup. For anyone who eats and drinks without recognizing the body of the Lord eats and drinks judgment on himself." The belief, therefore, is that inviting forward those who have not first been instructed would be unloving on the church's part, because it would be inviting them to sin; this is described as akin to letting someone drink poison without stopping them.

Some Baptists, and all American Baptist Association congregations, practice closed communion even more strictly than the Catholic, Lutheran, and Eastern Orthodox churches do. They restrict the partaking of communion (or the Lord's Supper) to members of the local church observing the ordinance. Thus members of other churches, even members of other local churches of the same denomination, are excluded from participation. The Strict Baptists in the United Kingdom derive their name from this practice.
In the United States the custom is usually, but not exclusively, associated with Landmark ecclesiology.

Source: https://en.wikipedia.org/wiki/Media_depictions_of_body_shape

Body shape refers to the many physical attributes of the human body that make up its appearance, including size and countenance. Body shape has come to imply not only sexual and reproductive ability but also wellness and fitness. In the West, slenderness is associated with happiness, success, youth, and social acceptability, while being overweight is associated with laziness. The media promote a weight-conscious standard for women more often than for men, and deviance from these norms results in social consequences. The media perpetuate this ideal in various ways, particularly by glorifying and focusing on thin actors and actresses, models, and other public figures while avoiding the use or image of overweight individuals. This thin ideal represents less than 5% of the American population.

It has been stated that the increase in eating disorders over the past several decades has coincided with an overall decrease, in pounds, of the ideal female body weight portrayed by the mass media. A group of researchers examined the magazines Cosmopolitan, Glamour, Mademoiselle, and Vogue from 1959 to 1999. Fashion models became increasingly thinner during the 1980s and 1990s, making the thin ideal even more difficult for women to achieve. Photos depicting the models' entire bodies significantly increased in number from the 1960s to the 1990s, and from 1995 to 1999 models were dressed in far more revealing outfits than they were from 1959 to 1963.

Women's magazines have been criticized for their conflicting messages, with an emphasis on food, cooking, child rearing, and entertaining. 75% of women's magazines contain at least one ad or article about how to alter one's appearance through cosmetic surgery, diet, or exercise, and 25% of the women's magazines surveyed included tips for dieting or messages about weight loss. Many women's magazines focus on how to lead a better life by improving one's physical appearance. Megenta magazine released an article on "How to dress for your body type", giving tips and tricks for looking one's best in an outfit while striving to encourage women to feel comfortable in their skin. Men's magazines, by contrast, provide information about hobbies, activities, and entertainment as the way for men to better their lives.

Much of the research on how the media affect body image examines the change in models and magazine articles over time. Garner, Garfinkel, Schwartz, and Thompson paid particular attention to the difference in body shape of Playboy centerfolds over a 20-year period. They found that over the years the body mass, bust, and hip measurements decreased while height increased, and that the centerfolds weighed 13%-19% less than the normal body weight for women of their age (Cusumano, Thompson 1997). Other studies found that over the years magazines like Seventeen, YM, and Cosmopolitan all increased their number of articles about diet and exercise. Anderson and DiDomenico (1992) compared popular women's and men's magazines and found that diet and exercise articles appeared more than 10 times as often in women's magazines as in men's.

The modeling and fashion industries have come under fire in recent years for embracing and promoting an ultra-thin appearance, creating an "unhealthy stigma".
According to research by Might Goods covering 3,000 models from 20 leading model agencies, 94% of the models were underweight. In addition, a study conducted by Jennifer Brenner and Joseph Cunningham observed that the majority of female models were underweight. The average American female fashion model begins working in the modeling business at age 13-17. The average female model in the United States weighs between 90 and 120 pounds and stands 5'8" to 5'11" tall. In comparison, according to the Centers for Disease Control and Prevention (CDC), the average American woman weighs 168.5 pounds and is 5'4" tall. According to the American Medical Association (AMA), thin models on the catwalk, as well as in social media and fashion photography, lead to unrealistic body expectations, which in turn can lead to eating disorders and other emotional problems. With mass advertising promoting the thin body, plastic surgery, and cosmetic surgery, women and young girls are bombarded with the idea of achieving a thin body. Some countries, such as Israel and France, have moved to control the issue by regulating the body mass indexes of models and by informing the public when advertising images have been manipulated.

To photoshop is "to alter (a photographic image) with Photoshop software or other image-editing software, especially in a way that distorts reality". Aerie, the lingerie line of American Eagle, began the Aerie Real campaign, in which models were no longer photoshopped. A 2016 study showed that women's body satisfaction decreased less when they saw untouched photos of women than when they saw photos that had been retouched. Many well-known magazines have been called out for photoshopping, a few examples being AdWeek, InStyle, Modeliste Magazine, and Fashion Magazine. Celebrities have commented on changes made to their photos by such magazines. In 2015, the actress, singer, and dancer Zendaya posted two pictures side by side from her magazine photoshoot, calling out the changes made by Modeliste Magazine. She stated, "These are the things that make women self conscious, that create the unrealistic ideals of beauty that we have".

Social media consists of websites like Twitter, Tumblr, Instagram, Pinterest, and Facebook that enable users to produce and share content. Thinspiration images that promote the idealization of thinness, and pro-eating-disorder websites, are becoming increasingly prevalent throughout social media. Pro-eating-disorder (i.e., pro-ana and pro-bulimia) websites are forms of social media where individuals can share advice and images that encourage their peers to engage in eating-disorder behaviors. These websites have been shown to have deleterious effects because they communicate to the viewer that the thin ideal is not only attainable but necessary. Women are more likely to compare themselves online when they feel the need to improve their appearance. Women with low self-esteem are more likely to feel dissatisfied after comparing themselves to images on social media, and women who struggle with preexisting eating disorders may exacerbate them through social media-fueled body comparison.
A study of college women in the US concluded that women who spent a significant amount of time on Facebook had increased body dissatisfaction. Whether positive or negative, other social media platforms have also been shown to have an impact on their users. In an online experiment involving US women, it was found that Pinterest users who followed fitness boards were "more likely to engage in extreme weight-loss behaviors", and that these boards promoted a positive correlation between social comparison, idealized female body typing, and extreme weight-loss behaviors.

Source: https://en.wikipedia.org/wiki/Differential_heat_treatment

Differential heat treatment (also called selective heat treatment or local heat treatment) is a technique used during heat treating to harden or soften certain areas of a steel object, creating a difference in hardness between these areas. There are many techniques for creating a difference in properties, but most can be defined as either differential hardening or differential tempering. These were common heat-treating techniques historically in Europe and Asia, with possibly the most widely known example coming from Japanese swordsmithing. Some modern varieties were developed in the twentieth century as metallurgical knowledge and technology rapidly increased.

Differential hardening takes one of two forms. It can involve heating the metal evenly to a red-hot temperature and then cooling it at different rates, turning part of the object into very hard martensite while the rest cools more slowly and becomes softer pearlite. It may instead consist of heating only a part of the object very quickly to red-hot and then rapidly cooling it (quenching), turning only that part into hard martensite and leaving the rest unchanged. Conversely, differential tempering consists of heating the object evenly to red-hot and then quenching the entire object, turning the whole thing into martensite; the object is then reheated to a much lower temperature to soften it (tempering), but only in a localized area, softening only a part of it.

Differential heat treatment alters the properties of various parts of a steel object differently, producing areas that are harder or softer than others. This creates greater toughness in the parts of the object where it is needed, such as the tang or spine of a sword, while producing greater hardness at the edge or in other areas where impact resistance, wear resistance, and strength are needed. Differential heat treatment can often make certain areas harder than would be allowable if the steel were uniformly, or "through", treated. There are several techniques used to differentially heat treat steel, but they can usually be divided into differential hardening and differential tempering methods.

During heat treating, when red-hot steel (usually between 1,500 °F (820 °C) and 1,600 °F (870 °C)) is quenched, it becomes very hard; too hard, in fact, becoming brittle like glass. Quenched steel is therefore usually heated again, slowly and evenly (usually between 400 °F (204 °C) and 650 °F (343 °C)), in a process called tempering, to soften the metal, thereby increasing the toughness.
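Since the sequence and temperature ranges above are easy to mix up, a schematic sketch may help; the step names and the dataclass are purely illustrative, and the ranges are just those quoted in the text, not a metallurgical specification.

```python
# Schematic summary of the quench-and-temper sequence described above.
# Names are illustrative; temperature ranges are the ones quoted in the
# text, not a metallurgical specification.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Step:
    name: str
    temp_f: Optional[Tuple[int, int]]  # working range in °F, where quoted
    result: str

QUENCH_AND_TEMPER = [
    Step("heat evenly to red-hot", (1500, 1600), "prepare for hardening"),
    Step("quench rapidly", None, "very hard but brittle martensite"),
    Step("temper slowly and evenly", (400, 650), "softer, tougher metal"),
]

# The differential variants described in the text change *where* a step
# is applied, not the steps themselves:
#  - differential hardening: cool the edge faster than the spine
#    (e.g., clay on the spine), so only the edge becomes martensite;
#  - differential tempering: quench the whole blade, then temper only
#    the spine, leaving the edge at the higher hardness.
for step in QUENCH_AND_TEMPER:
    rng = f"{step.temp_f[0]}-{step.temp_f[1]} °F" if step.temp_f else "-"
    print(f"{step.name:<26} {rng:<13} {step.result}")
```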
This softening, however, while it makes the blade less prone to breaking, leaves the edge more susceptible to deformation such as dulling, peening, or curling.

Differential hardening is a method used in heat treating swords and knives to increase the hardness of the edge without making the whole blade brittle. To achieve this, the edge is cooled faster than the spine by adding a heat insulator, such as clay, to the spine before quenching. To prevent cracking and loss of surface carbon, quenching is usually performed before beveling, shaping, and sharpening the edge. The same effect can also be achieved by carefully pouring water (perhaps already heated) onto the edge of a blade, as in the manufacture of some kukri. Differential hardening technology originated in China and later spread to Korea and Japan. The technique is mainly associated with later Chinese jian and dao, with the katana, the traditional Japanese sword, and with the khukuri, the traditional Nepalese knife; most blades made with it have visible temper lines. Earlier Chinese jian from the ancient era (e.g., the Warring States period to the Han dynasty) used tempering rather than differential heat treatment. The method is sometimes called differential tempering, but that term more accurately refers to a different technique, which originated with the broadswords of Europe.

Modern versions of differential hardening were developed when means of rapidly heating the metal were devised, such as the oxy-acetylene torch and induction heating. With flame-hardening and induction-hardening techniques, the steel is quickly heated to red-hot in a localized area and then quenched, hardening only part of the object and leaving the rest unaltered.

Differential tempering was more commonly used for cutting tools, although it was sometimes applied to knives and swords as well. It is obtained by quenching the blade uniformly, then tempering one part of it, such as the spine or the center portion of a double-edged blade, usually with a torch or some other directed heat source. The heated portion of the metal is softened by this process, leaving the edge at the higher hardness.

Differential hardening (also called differential quenching, selective quenching, selective hardening, or local hardening) is most commonly used in bladesmithing to increase the toughness of a blade while keeping very high hardness and strength at the edge. This helps make the blade very resistant to breaking, by making the spine very soft and bendable, while allowing greater hardness at the edge than would be possible if the blade were uniformly quenched and tempered. The result is a tough blade that maintains a very sharp, wear-resistant edge even during rough use, such as is found in combat.

Although differential hardening produces a very hard edge, it also leaves the rest of the sword rather soft, which can make it prone to bending under heavy loads, such as when parrying a hard blow, and can make the edge more susceptible to chipping or cracking. Swords of this type can usually be resharpened only a few times before the softer metal underneath the edge is reached.
However, if properly protected and maintained, these blades can usually hold an edge for long periods, even after slicing through bone and flesh or through heavily matted bamboo used to simulate cutting through body parts, as in iaido.

Flame hardening is often used to harden only a portion of an object by quickly heating it with a very hot flame in a localized area and then quenching the steel. This turns the heated portion into very hard martensite but leaves the rest unchanged. Usually an oxy-gas torch is used to provide such high temperatures. Flame hardening is a very common surface-hardening technique, often used to provide a very wear-resistant surface. A common use is hardening the surface of gears, making the teeth more resistant to erosion. The gear will usually be quenched and tempered to a specific hardness first, making the majority of the gear tough, and then the teeth are quickly heated and immediately quenched, hardening only the surface. Afterward, it may or may not be tempered again to achieve the final differential hardness.

Induction hardening is a surface-hardening technique that uses induction coils as a very rapid means of heating the metal. With induction heating, the surface of the steel can be heated to red-hot very quickly, before the heat can penetrate any distance into the metal. The surface is then quenched, hardening it, and it is often used without further tempering. This makes the surface very resistant to wear, with tougher metal directly underneath, and leaves the majority of the object unchanged. A common use for induction hardening is hardening the bearing surfaces, or "journals", of automotive crankshafts and the rods of hydraulic cylinders.

Source: https://en.wikipedia.org/wiki/Kinston,_North_Carolina

Kinston is a city in Lenoir County, North Carolina, United States, with a population of 21,677 as of the 2010 census. It has been the county seat of Lenoir County since the county's formation in 1791. Kinston is located in the coastal plains region of eastern North Carolina.

In 2009, Kinston won the All-America City Award, marking the second time in 21 years that the city had won the title.

At the onset of the Civil War, Camp Campbell and Camp Johnston were established near the city as training camps, a bakery on Queen Street was converted to produce hardtack in large quantities, and a factory producing shoes for the military was located in Kinston. The Battle of Kinston took place in and around the city on December 14, 1862.

From February 5 to February 22, 1864, 22 deserters were executed by hanging in the city. The court martial and subsequent hangings were carried out by the 54th Regiment, North Carolina Troops, Confederate States Army. Fifteen of the men were from Jones County and had all begun their service in the 8th Battalion North Carolina Partisan Rangers.

The Battle of Wyse Fork, also known as the Battle of Southwest Creek (March 7-10, 1865), took place very near the city. In this later battle, the Confederate ram Neuse was scuttled to avoid capture by Union troops. Remnants of the ship have been salvaged and were on display at Richard Caswell Park on West Vernon Avenue. A climate-controlled museum has since been built on downtown Queen Street, and the hulk has been moved there to prevent further deterioration of the original ship's remains.
A full-scale replica vessel (Ram Neuse II) has been constructed near the original's resting place (known as the "Cat's Hole") beside the bank of the Neuse River on Heritage Street in Kinston. Union Army forces occupied the city following the battle, and United States troops were assigned to the area through the Reconstruction era.

Despite the hardships of war and Reconstruction, the population of the city continued to grow. By 1870, it had increased to 1,100 people, and it grew to more than 1,700 within a decade.

During the late 19th century, the city expanded into new areas of industry, most notably the production of horse-drawn carriages. Kinston also became a major tobacco- and cotton-trading center; by the start of the 20th century, more than 5 million pounds of tobacco were being sold annually in Kinston's warehouses. Along with the growth in population and industry came a growth in property values, with some parcels increasing in value more than fivefold within a 20-year period.

On April 6, 1916, Joseph Black was taken from the Lenoir County Jail and lynched by a mob of armed men. He had been accused of assisting his son in an escape attempt.

Kinston is in the Atlantic coastal plain region of North Carolina, mainly on the northeast side of the Neuse River and northeast of the center of Lenoir County. It is 26 miles (42 km) east of Goldsboro, 30 miles (48 km) south of Greenville, and 35 miles (56 km) west of New Bern. The Atlantic Ocean at Emerald Isle is 57 miles (92 km) to the southeast, and Raleigh, the state capital, is 80 miles (130 km) to the northwest. According to the U.S. Census Bureau, the city of Kinston has a total area of 18.6 sq mi (48.1 km2), of which 0.2 sq mi (0.5 km2), or 0.95%, is covered by water.

The North Carolina Department of Public Safety (formerly the North Carolina Department of Juvenile Justice and Delinquency Prevention) operates the Dobbs Youth Development Center, a juvenile correctional facility, in Kinston. The facility, which opened in 1944, has a capacity of 44.

In the 2017 municipal elections, Democratic candidate Dontario Hardy beat incumbent B.J. Murphy by a margin of 205 votes. City Councilman Robert A. Swinson IV was re-elected alongside newcomer Kristal Suggs, completing Kinston's first-ever all-African-American city council.

As with most of North Carolina, Kinston is predominantly Protestant, with large concentrations of Baptists, Methodists, and various other evangelical groups. Episcopalians, Presbyterians, and Disciples of Christ also constitute a significant portion of the population.

The Roman Catholic community in Kinston has seen steady growth over the years with the migration of Hispanic workers to the area. Catholic migrants from the Northeastern United States have also arrived to work for the North Carolina Global TransPark and in nearby Greenville.

Kinston at one time had a sizeable Jewish community, but, as with most Jewish communities in the rural South, it has seen a steady decline. Temple Israel, Kinston's only synagogue, has only a few remaining members.

Source: https://en.wikipedia.org/wiki/Malayan_Union

The Malayan Union was a union of the Malay states and the Straits Settlements of Penang and Malacca. It was the successor to British Malaya and was conceived to unify the Malay Peninsula under a single government in order to simplify administration.
Following opposition by the ethnic Malays, the union was reorganised as the Federation of Malaya in 1948.

Prior to World War II, British Malaya consisted of three groups of polities: the protectorate of the Federated Malay States, five protected Unfederated Malay States, and the crown colony of the Straits Settlements.

On 1 April 1946, the Malayan Union officially came into existence with Sir Edward Gent as its governor, combining the Federated Malay States, the Unfederated Malay States, and the Straits Settlements of Penang and Malacca under one administration. The capital of the Union was Kuala Lumpur. The former Straits Settlement of Singapore was administered as a separate crown colony.

The idea of the Union was first expressed by the British in October 1945 (plans had been presented to the War Cabinet as early as May 1944), in the aftermath of the Second World War, by the British Military Administration. Sir Harold MacMichael was assigned the task of gathering the Malay state rulers' approval for the Malayan Union in the same month, and in a short period of time he managed to obtain all of the rulers' approval. The reasons for their agreement, despite the loss of political power it entailed, have been much debated; the consensus appears to be that because the Malay rulers had, of course, remained resident during the Japanese occupation, they were open to accusations of collaboration, and that they were threatened with dethronement. Hence approval was given, though with the utmost reluctance.

When it was unveiled, the Malayan Union gave equal rights to people who wished to apply for citizenship. Citizenship was granted automatically to those born in any state of British Malaya or in Singapore who were living there before 15 February 1942; to those born outside British Malaya or the Straits Settlements, but only if their fathers were citizens of the Malayan Union; and to those who had reached 18 years of age and had lived in British Malaya or Singapore "10 out of 15 years before 15 February 1942". Those applying for citizenship had to have lived in Singapore or British Malaya "for 5 out of 8 years preceding the application", had to be of good character, had to understand and speak English or Malay, and "had to take an oath of allegiance to the Malayan Union". However, the citizenship proposal was never actually implemented: owing to opposition it was postponed and then modified, which made it harder for many Chinese and Indian residents to obtain Malayan citizenship.

The Sultans, the traditional rulers of the Malay states, conceded all their powers to the British Crown except in religious matters, and the Malayan Union was placed under the jurisdiction of a British Governor, signalling the formal inauguration of British colonial rule in the Malay Peninsula. Moreover, while the State Councils were kept functioning in the former Federated Malay States, they lost the limited autonomy they had enjoyed, were left to administer only some less important local aspects of government, and became an extended hand of the federal government in Kuala Lumpur. The replacement of the Sultans by British Residents as heads of the State Councils also meant that the political status of the Sultans was greatly reduced.

The Malays generally opposed the creation of the Union.
The opposition was due to the methods Sir Harold MacMichael had used to acquire the Sultans' approval, the reduction of the Sultans' powers, and the easy granting of citizenship to immigrants. The United Malays National Organisation (UMNO), a Malay political association formed by Dato' Onn bin Ja'afar on 10 May 1946, led the opposition to the Malayan Union. Malays also wore white bands around their heads, signifying mourning for the loss of the Sultans' political rights.

After the inauguration of the Malayan Union, the Malays, under UMNO, continued to oppose it. They used civil disobedience as a means of protest, refusing to attend the installation ceremonies of the British governors and declining to participate in the meetings of the Advisory Councils; Malay participation in the government bureaucracy and the political process thus came to a complete stop. The British recognised this problem and took measures to consider the opinions of the major races in Malaya before amending the constitution. The Malayan Union was dissolved and replaced by the Federation of Malaya on 1 February 1948.

Source: https://en.wikipedia.org/wiki/Digimon_Story:_Cyber_Sleuth

Digimon Story: Cyber Sleuth is a role-playing video game developed by Media.Vision and published by Bandai Namco Entertainment, released in Japan on March 12, 2015 for PlayStation Vita and PlayStation 4. Part of the Digimon franchise, it is the fifth installment in the Digimon Story series, following 2011's Super Xros Wars, and the first to be released on home consoles. The game was released in North America on February 2, 2016, becoming the first installment of the Digimon Story series released in North America since 2007's Digimon World Dawn and Dusk, and the first released there under its original title.

A sequel, Digimon Story: Cyber Sleuth – Hacker's Memory, was released in Japan in 2017 and in Western territories in 2018. In July 2019, a port of the game and its sequel for Nintendo Switch and Windows was announced for release on October 18, 2019 as Digimon Story Cyber Sleuth: Complete Edition, although the PC version was released a day early.

Digimon Story: Cyber Sleuth is a role-playing game played from a third-person perspective, in which players control a human character with the ability to command Digimon: digital creatures with their own unique abilities who do battle against other Digimon. Players can choose Palmon, Terriermon, or Hagurumon as their starting partner at the beginning of the game, with more partners obtainable as they make their way into new areas. A total of 249 unique Digimon are featured, including seven that were available as DLC throughout the life of the game and two that were exclusive to the Western release. The title features a New Game Plus mode in which players retain all of their Digimon, non-key items, money, memory, sleuth rank, scan percentages, and Digifarm progress.

Players assume the role of Takumi Aiba (male) or Ami Aiba (female), a young Japanese student living in Tokyo while their mother, a news reporter, is working abroad. After receiving a message from a hacker, Aiba investigates the physical-interaction cyberspace network EDEN, where they meet Nokia Shiramine and Arata Sanada. The hacker gives them "Digimon Capture" programs and locks them in EDEN.
While searching for an exit, Aiba meets Yuugo, leader of the hacker team "Zaxon"; Yuugo teaches Aiba how to use their Digimon Capture and tells them that Arata is a skilled hacker himself. Aiba meets up with Nokia and Arata, who unlocks a way out, but the three are then attacked by a mysterious creature that grabs Aiba and corrupts their logout process.

Aiba emerges in the real world as a half-digitized entity and is rescued by detective Kyoko Kuremi, head of the Kuremi Detective Agency, which specializes in cyber-crimes. Aiba manifests an ability, Connect Jump, which allows them to travel into and through networks. Recognizing their utility, Kyoko helps Aiba stabilize their digital body and recruits them as her assistant. They investigate a hospital ward overseen by Kamishiro Enterprises, which owns and manages EDEN, and find it filled with patients suffering from a phenomenon called "EDEN Syndrome," in which users logged onto EDEN fall into a seemingly permanent coma. Aiba discovers their own physical body in the ward before being confronted by a mysterious girl. The girl admits to knowing one of the other victims and helps Aiba avoid Rie Kishibe, the current president of Kamishiro.

The mysterious girl approaches Kyoko and Aiba, reveals herself to be Yuuko Kamishiro, the daughter of Kamishiro Enterprises' former president, and requests that they investigate her father's purported suicide. With the assistance of Goro Mayatoshi, a detective in the Tokyo Police Department and an old friend of Kyoko's father, Kyoko and Aiba gather evidence of illegal activity within Kamishiro. Kyoko's plans are thwarted when Kishibe holds a sudden press conference, admitting to the activity and terminating several non-essential employees as scapegoats, which causes Mayatoshi's superiors to call off the accusations. Aiba, Arata, Yuuko, and Kyoko take advantage of an EDEN preview event to hack into the Kamishiro servers, learning of a "Paradise Lost Plan" and that Yuuko's older brother is a victim of EDEN Syndrome, a casualty of a failed beta test eight years earlier that was apparently covered up by Kamishiro.

Nokia, with Aiba's help, reunites with an Agumon and a Gabumon she had met and bonded with in Kowloon; she learns from them that Digimon are not hacker programs but living creatures from a "Digital World", and that Agumon and Gabumon came to EDEN for a purpose they cannot remember. Nokia vows to help them recover their memories but is hampered by her lack of fighting experience; after being soundly defeated by Yuugo's lieutenant Fei, she resolves to become stronger and forms her own group, the Rebels, to improve relations between humans and Digimon. This allows Agumon and Gabumon to digivolve into WarGreymon and MetalGarurumon and gains her a large following, but Yuugo worries that she might interfere with his goal of protecting EDEN.

Meanwhile, Aiba assists Arata in investigating "Digital Shift" phenomena occurring around Tokyo. They meet Akemi Suedou, who identifies the creature behind the Digital Shifts as an Eater: a mass of corrupted data that consumes users' mental data, making it responsible for EDEN Syndrome and for Aiba's half-digital state. Eaters have links to a "white boy ghost" that keeps appearing around them, and by "eating" data they can evolve into different forms.
Arata, discouraged after witnessing many friends become victims of EDEN Syndrome, decides to help Aiba upon learning the truth about their condition.

As Aiba continues their investigations, Jimiken "Jimmy KEN," a Japanese rock idol and disgruntled Zaxon hacker, breaks away from Zaxon and forms a group called the "Demons." Jimiken hijacks Tokyo's television signals, broadcasting a music video overlaid with subliminal messaging to hypnotize users into logging onto EDEN and entering the Demons' stronghold. Aiba defeats Jimiken, who reveals that the signal-hijacking equipment was given to him by Rie Kishibe in exchange for his loyalty, but his account is destroyed by Fei before he can be interrogated further.

Yuugo mobilizes hackers around EDEN to attempt a large-scale attack on Kamishiro Enterprises' high-security server, codenamed "Valhalla." Arata intervenes, revealing that he is the former leader of a hacker group that failed to hack the Valhalla server in the past, and a battle breaks out between Yuugo's Zaxon hackers, Arata's group of veteran hackers, and Nokia's Rebels, supported by Aiba. The battle is interrupted when Rie unleashes Eaters in the server, revealing that the entire event was a trap to accumulate Eater prey, and forcibly logs out Yuugo, who turns out to be Yuuko using a false EDEN avatar modeled on and named after her older brother. Rie informs Yuuko that she had been using the avatar to manipulate her actions, and begins extracting Yuuko's memories.

The game holds a score of 75/100 on the review aggregator Metacritic, indicating generally favorable reviews. It received a 34 out of 40 total score from the Japanese magazine Weekly Famitsu, based on individual scores of 8, 9, 9, and 8.

Destructoid felt that the game was not much of a departure from older role-playing games, stating, "The battle system is basically everything you've seen before from the past few decades of JRPGs," including random encounters that are "either deliciously or inexcusably old-school, depending on your tastes." PlayStation LifeStyle felt that while the game "isn't a perfect video game interpretation of Bandai Namco's long-running franchise," criticizing its linear dungeon design and "cheap" interface, its gameplay improvements were a step in the right direction "for fans who have been waiting to see the series get on Pokémon's level." The website also commended the colorful art and character design of Suzuhito Yasuda, declaring that "Yasuda's art brings crucial style and life to Digimon's game series, which had spent previous years sort of fighting to establish its identity." Hardcore Gamer thought the game was an important step forward for the franchise, stating, "It isn't perfect; its story and script could use some fine-tuning, and the world needs to be more interesting, but overall, this is a solid first step."

The PlayStation Vita version of Digimon Story: Cyber Sleuth sold 76,760 copies in its debut week in Japan, becoming the third-best-selling title of the week. Although initial sales were lower than those of its predecessor, Digimon World Re:Digitize, Cyber Sleuth sold approximately 91.41% of all physical copies shipped to the region and went on to sell a total of 115,880 copies by the end of 2015, making it the 58th-best-selling software title of that year. In the UK, Digimon Story: Cyber Sleuth was the 11th-best-selling game in its week of release.
The PlayStation Vita version was the best-selling digital title on the PlayStation Store in North America and Europe in the month of its release, and the PlayStation 4 version was the 20th-best-selling digital title in North America and the 19th in Europe over the same period. The game also performed well in Latin American countries (#2 in Brazil; #3 in Mexico, Argentina, Chile, and Costa Rica; #4 in Guatemala; #6 in Perú; #9 in Colombia). By May 2019, Cyber Sleuth had sold over 800,000 copies worldwide, and by October 2020, Cyber Sleuth and Hacker's Memory combined had shipped more than 1.5 million units worldwide. The Switch port of Complete Edition sold 4,536 copies in its first week in Japan.

Source: https://en.wikipedia.org/wiki/Taiwan_Miracle

The Taiwan Miracle, or Taiwan Economic Miracle, refers to the rapid industrialization and economic growth of Taiwan during the latter half of the twentieth century. As it developed alongside Singapore, South Korea, and Hong Kong, Taiwan became known as one of the "Four Asian Tigers".

In the 1970s, protectionism was on the rise, and the United Nations switched recognition from the government of the Republic of China to the government of the People's Republic of China as the sole legitimate representative of China: the ROC was expelled by General Assembly Resolution 2758 and replaced in all UN organs by the PRC. The Kuomintang began a process of enhancing and modernizing industry, mainly in high technology (such as microelectronics, personal computers, and peripherals). One of the biggest and most successful technology parks was built in Hsinchu, near Taipei.

Many Taiwanese brands became important suppliers to world-renowned firms such as DEC and IBM, while others established branches in Silicon Valley and elsewhere in the United States and made names of their own. The government also urged the textile and clothing industries to enhance the quality and value of their products so as to avoid restrictive import quotas, which are usually measured in volume. The decade also saw the beginnings of a genuinely independent union movement after decades of repression, with some significant events in 1977 giving the new unions a boost.

One was the formation of an independent union at the Far East Textile Company, after a two-year effort discredited the former management-controlled union. This was the first union to exist independently of the Kuomintang in Taiwan's post-war history (although the Kuomintang retained a minority membership on its committee). Rather than prevailing upon the state to use martial law to smash the union, the management adopted the more cautious approach of buying workers' votes at election times; such attempts repeatedly failed, however, and by 1986 all of the elected leaders were genuine unionists. Another event, historically the most important, was what is now called the "Zhongli incident".

By the 1980s, Taiwan had become an economic power, with a mature and diversified economy, a solid presence in international markets, and huge foreign exchange reserves.
Its companies were able to go abroad and internationalize their production, investing massively in Asia (mainly in the People's Republic of China) and in Organisation for Economic Co-operation and Development countries, mainly the United States.

Higher salaries and better-organized trade unions in Taiwan, together with the reduction of Taiwanese export quotas, meant that the bigger Taiwanese companies moved their production to China and Southeast Asia. Civil society in the now-developed country wanted democracy, and rejection of the KMT dictatorship grew. A major step occurred when Lee Teng-hui, a native of Taiwan, became President, and the KMT started down a new path in search of democratic legitimacy.

Two aspects must be remembered: the KMT was at the center of the structure and controlled the process, and the structure was a web of relations between enterprises, between the enterprises and the state, and between the enterprises and the global market, through trading companies and international economic exchanges. Native Taiwanese were largely excluded from the mainlander-dominated government, so many went into the business world.

Economic growth has become much more modest since the late 1990s. A key factor in understanding this new environment is the rise of China, which offers the same conditions that made the Taiwan Miracle possible 40 years earlier: a quiet political and social environment, cheap and educated workers, and the absence of independent trade unions. To keep growing, the Taiwanese economy must move away from labor-intensive industries, which cannot compete with China, Vietnam, or other less developed countries, and keep innovating and investing in information technology. Since the 1990s, Taiwanese companies have been permitted to invest in China, and a growing number of Taiwanese businessmen are demanding easier communications between the two sides of the Taiwan Strait.

One major point of emphasis in Taiwan is English education. Mirroring Hong Kong and Singapore, the ultimate goal is to become a country fluent in three languages: Taiwanese; Mandarin, the national language of both China and Taiwan; and English, as a bridge between East and West.

According to Western financial markets, consolidation of the financial sector remains a concern, as it continues at a slow pace and the market is so fragmented that no bank controls more than 10% of it; the Taiwanese government was obligated, under its WTO accession treaty, to open this sector between 2005 and 2008.

Debate on opening the "Three Links" with the People's Republic of China was completed in 2008, with the security risk of economic dependence on the PRC the biggest barrier. By decreasing transportation costs, it was hoped that more money would be repatriated to Taiwan and that businesses would be able to keep operations centers in Taiwan while moving manufacturing and other facilities to mainland China.

A law forbidding any firm from investing more than 40% of its total assets on the mainland was dropped in June 2008, when the new Kuomintang government relaxed the rules on investment in the PRC. Dialogue through semi-official organisations (the SEF and the ARATS) reopened on 12 June 2008 on the basis of the 1992 Consensus, with the first meeting held in Beijing.
Taiwan hopes to become a major operations center in East Asia.

Source: https://en.wikipedia.org/wiki/Bridge_to_Terabithia_(2007_film)

Bridge to Terabithia is a 2007 American fantasy drama film directed by Gábor Csupó and written by David L. Paterson and Jeff Stockwell, based on the 1977 novel of the same name by Katherine Paterson. The film stars Josh Hutcherson, AnnaSophia Robb, Bailee Madison, Zooey Deschanel, and Robert Patrick. It follows two 12-year-old neighbors who create a fantasy world called Terabithia to cope with reality, spending their free time together in an abandoned tree house.

The original novel was based on events from the childhood of the author's son, screenwriter David Paterson. When he asked his mother if he could write a screenplay of the novel, she agreed, in part because of his ability as a playwright. Produced by Walden Media, principal photography was shot in Auckland, New Zealand, within 60 days. Film editing took ten weeks, while post-production, music mixing, and visual effects took several months, with the film fully completed by November 2006. This was Michael Chapman's last film as cinematographer before his retirement and eventual death in 2020.

Bridge to Terabithia was theatrically released in the United States on February 16, 2007 by Walt Disney Pictures. The film received positive reviews from critics, who praised its visuals, performances, and faithfulness to the source material. It was a box-office success, grossing $137.6 million worldwide against a budget of $20-25 million. At the 29th Young Artist Awards, the film won all five awards for which it was nominated.

Jesse "Jess" Aarons is a 12-year-old aspiring artist living with his financially struggling family in Lark Creek. He rides the bus to school with his younger sister May Belle and avoids the school bully Janice Avery. In class, Jess is also bullied by classmates Scott Hoager and Gary Fulcher, and he meets a new student named Leslie Burke. At recess, Jess enters a running event for which he has been training at home; Leslie also enters and manages to win, much to Jess's irritation. On the way home, Jess and Leslie learn that they are next-door neighbors.

Jess is shown to have a difficult relationship with his father, who spends more time with May Belle, and because of the family's financial struggles his mother forces him to wear his older sister's sneakers. One day at school, Leslie compliments Jess's drawing ability, and they become friends.

After school, they venture into the woods and swing across a creek on a rope. Jess and Leslie find an abandoned treehouse on the other side and invent a new world, which they call Terabithia. For the next few days, they spend their free time in the treehouse getting to know each other.

Leslie gives Jess an art kit for his birthday. Jess becomes angry with his father over his attitude towards him, loses his belief in Terabithia, and denies its existence the next day at school. Later, Jess apologizes to Leslie by giving her a puppy, whom she names Prince Terrien (P.T.).

Once in Terabithia, they encounter various creatures, including a giant troll resembling Janice, squirrel-like creatures resembling Hoager called "Squogers", and "Hairy Vultures" resembling Fulcher.

At school, Leslie becomes frustrated by Janice Avery's bullying.
Jess and Leslie play a prank on Janice and she is embarrassed in front of everyone on the bus. Leslie introduces Jess to her parents, and the two of them help paint the Burkes' house. At school, Leslie discovers from Janice that her bullying is due to her abusive father, and the two become friends, with Janice later befriending Jess as well. Jess and Leslie take P.T. to Terabithia, where they fight off the Dark Master's creatures resembling their bullies, this time with the troll as their ally.\n\nThe next morning, Ms. Edmunds, the music teacher whom Jess has a crush on, calls to invite him on a one-on-one field trip to an art museum. When Jess returns home, his father reveals that Leslie died after hitting her head in the creek when the rope she used snapped. Jess first denies it and runs to check on Leslie, but he notices the severed rope as well as emergency vehicles surrounding her house before eventually accepting her death.\n\nThe following day, Jess and his parents visit the Burke family to pay their respects. Leslie's father Bill tells Jess she loved him and thanks him for being the best friend she ever had, since she never had friends at her old school. Jess feels overwhelming guilt for Leslie's death, lashing out at both Hoager and May Belle, and imagining the Dark Master from Terabithia chasing after him before breaking down into tears, but his father comforts and consoles him. Jess says Leslie is gone forever, but his father tells him she never will be, as long as he keeps her memory alive.\n\nJess decides to re-imagine Terabithia and builds a bridge across the river to welcome a new ruler. He invites May Belle to Terabithia and the siblings agree to rule together, with Jess as king and May Belle as the princess.\n\nCsup\u00f3 explained that \"it was a very conscious decision from the very beginning that we're not going to overdo the visual effects because of the story's integrity and the book's integrity\", because there was only a brief mention of Jess and Leslie fighting imaginary creatures in the forest in the novel. With that in mind, they \"tried to do the absolute minimum, which would be required to put it into a movie version\".[5]\n\nIn designing the fantasy creatures found in Terabithia, Csup\u00f3 wanted to make creatures that were \"little more artsy, imaginative, fantastical creatures than the typical rendered characters you see in other movies\", and drew inspiration from Terry Gilliam and Ridley Scott. Dima Malanitchev came up with the drawings for the creatures with Csup\u00f3's guidance. Csup\u00f3 chose to have Weta Digital render the 3D animation because he \"was impressed with their artistic integrity, the teamwork, the fact that people were really nice, and also they responded to our designs very positively\". Weta modified some of the creature designs, but ultimately remained faithful to Csup\u00f3's original designs.\n\nAround 100 Weta crew members worked on the effects for the film. The company was already animating the creatures while the film was being shot, and its crew members were on-set for all the scenes that involved special effects during the filming. Weta visual effects supervisor Matt Aitken explained that the process of interpreting the creatures was \"split into two steps\". First, natural-looking creatures were created based on pencil sketches by Csup\u00f3 and Malanitchev, mostly through Photoshop collages created by visual effects art director Michael Pangrazio.
The second step was to figure out animation or motion styles that best suited these creatures.\n\nLeslie's costumes in the film were designed to look as if the character \"might have made some of them herself\", and they were updated from those described in the book to reflect what would currently be considered eccentric.", "doc_id": "8e7803d0-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Yasujir\u014d_Ozu", "document": "Yasujir\u014d Ozu (12 December 1903 \u2013 12 December 1963) was a Japanese film director and screenwriter. He began his career during the era of silent films, and his last films were made in colour in the early 1960s. Ozu first made a number of short comedies, before turning to more serious themes in the 1930s. The most prominent themes of Ozu's work are marriage and family, especially the relationships between generations. His most widely beloved films include Late Spring (1949), Tokyo Story (1953), and An Autumn Afternoon (1962).\n\nOzu is widely regarded as one of the world's greatest and most influential filmmakers, and his work has continued to receive acclaim since his death. In the 2012 Sight & Sound poll, Ozu's Tokyo Story was voted the third-greatest film of all time by critics world-wide. In the same poll, Tokyo Story was voted the greatest film of all time by 358 directors and film-makers world-wide.\n\nWith his uncle acting as intermediary, Ozu was hired by the Shochiku Film Company as an assistant in the cinematography department on 1 August 1923, against the wishes of his father. His family home was destroyed in the earthquake of 1923, but no members of his family were injured.\n\nOn 12 December 1924, Ozu started a year of military service. He finished his military service on 30 November 1925, leaving as a corporal.\n\nIn 1926, he became a third assistant director at Shochiku. In 1927, he was involved in a fracas in which he punched another employee for jumping the queue at the studio cafeteria; when called to the studio director's office, he used the occasion as an opportunity to present a film script he had written. In September 1927, he was promoted to director in the jidaigeki (period film) department, and directed his first film, Sword of Penitence, which has since been lost. Sword of Penitence was based on a story by Ozu, with a screenplay by Kogo Noda, who would become his co-writer for the rest of his career. On September 25, he was called up for service in the military reserves until November, which meant that the film had to be partly finished by another director.\n\nIn 1928, Shiro Kido, the head of the Shochiku studio, decided that the company would concentrate on making short comedy films without star actors. Ozu made many of these films. The film Body Beautiful, released on 1 December 1928, was the first Ozu film to use a low camera position, which would become his trademark. After a series of these \"no star\" pictures, Ozu's first film with stars, I Graduated, But..., starring Minoru Takada and Kinuyo Tanaka, was released in September 1929. In January 1930, he was entrusted with Shochiku's top star, Sumiko Kurishima, in her new year film, An Introduction to Marriage. His subsequent films of 1930 impressed Shiro Kido enough that he invited Ozu on a trip to a hot spring. In his early works, Ozu used the pseudonym \"James Maki\" for his screenwriting credit.
His film Young Miss, with an all-star cast, marked the first time he used the pen name James Maki, and was also his first film to appear in the film magazine Kinema Jumpo's \"Best Ten\", at third position.\n\nIn 1932, his I Was Born, But..., a comedy about childhood with serious overtones, was received by movie critics as the first notable work of social criticism in Japanese cinema, winning Ozu wide acclaim. In 1935 Ozu made a short documentary with a soundtrack, Kagami Jishi, in which Kikugoro VI performed a Kabuki dance of the same title. This was made at the request of the Ministry of Education. Like the rest of Japan's cinema industry, Ozu was slow to switch to the production of talkies: his first film with a dialogue sound-track was The Only Son in 1936, five years after Japan's first talking film, Heinosuke Gosho's The Neighbor's Wife and Mine.\n\nOn 9 September 1937, at a time when Shochiku was unhappy about Ozu's lack of box-office success, despite the praise he received from critics, the thirty-four-year-old Ozu was conscripted into the Imperial Japanese Army. He spent two years in China in the Second Sino-Japanese War. He arrived in Shanghai on 27 September 1937 as part of an infantry regiment which handled chemical weapons. He started as a corporal but was promoted to sergeant on 1 June 1938. From January until September 1938 he was stationed in Nanjing, where he met Sadao Yamanaka, who was stationed nearby. In September, Yamanaka died of illness. In 1939, Ozu was dispatched to Hankou, where he fought in the Battle of Nanchang and the Battle of Xiushui River. In June, he was ordered back to Japan, arriving in Kobe in July, and his conscription ended on 16 July 1939.\n\nSome of Ozu\u2019s published diaries cover his wartime experiences from December 20, 1938, to June 5, 1939. He expressly forbade the publication of another diary from his wartime years. In the published diaries, reference to his group\u2019s participation in chemical warfare (in violation of the Geneva Protocol, though Japan had withdrawn from the League of Nations in 1933) can be found, for example, in various entries from March 1939. In other entries, he describes Chinese soldiers in disparaging terms, likening them in one passage to insects. Although operating as a military squad leader, Ozu retained his directorial perspective, once commenting that the initial shock and subsequent agony of a man as he is hacked to death is very much like its depiction in period films.\n\nOzu returned to Japan in February 1946, and moved back in with his mother, who had been staying with his sister in Noda in Chiba prefecture. He reported for work at the Ofuna studios on 18 February 1946. His first film released after the war was Record of a Tenement Gentleman in 1947. Around this time, the Chigasakikan Ryokan became Ozu's favoured location for scriptwriting.\n\nTokyo Story was the last script that Ozu wrote at Chigasakikan. In later years, Ozu and Noda used a small house in the mountains at Tateshina in Nagano Prefecture called Unkos\u014d to write scripts, with Ozu staying in a nearby house called Mugeis\u014d.\n\nOzu's films from the late 1940s onward were favourably received, and the entries in the so-called \"Noriko trilogy\" (starring Setsuko Hara) of Late Spring (1949), Early Summer (1951), and Tokyo Story (1953) are among his most acclaimed works, with Tokyo Story widely considered his masterpiece.
Late Spring, the first of these films, marked the beginning of Ozu's commercial success and of the development of his mature cinematography and storytelling style. These three films were followed by his first colour film, Equinox Flower, in 1958, Floating Weeds in 1959, and Late Autumn in 1960. In addition to Noda, other regular collaborators included cinematographer Yuharu Atsuta, along with the actors Chish\u016b Ry\u016b, Setsuko Hara, and Haruko Sugimura.", "doc_id": "8e780524-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Tyrone_Barnett", "document": "Tyrone Barnett (born 28 October 1985) is an English professional footballer who plays as a forward for National League North club Hereford. A former West Bromwich Albion youth team player, the self-described 'athletic target man' dropped into non-league football with Rushall Olympic, AFC Telford United, Willenhall Town, and Hednesford Town. He won the Birmingham Senior Cup with Hednesford, before his goal-scoring record earned him a move to Football League club Macclesfield Town in May 2010. He was voted the club's Player of the Year for 2010\u201311, and won a move to Crawley Town in June 2011. He was named on the 2011\u201312 League Two PFA Team of the Year, and joined Peterborough United on loan in February 2012, which was made into a permanent move at the end of the season for \u00a31.1 million.\n\nHe struggled with injuries and discipline issues in the 2012\u201313 season, and was loaned out to Ipswich Town in November 2012. The 2013\u201314 season was more positive, though he was still allowed to join Bristol City on a four-month loan in January 2014. He joined Oxford United on loan in September 2014, before he was sold on to Shrewsbury Town in February 2015. He helped Shrewsbury to win promotion out of League Two at the end of the 2014\u201315 campaign, before he was loaned out to Southend United in January 2016. His contract with Shrewsbury was cancelled in August 2016, and he subsequently signed with AFC Wimbledon. He switched to Port Vale in July 2017, and moved on to Cheltenham Town in January 2019 after a successful loan spell. He joined non-League Eastleigh in August 2019, before moving to Hereford in June 2022.\n\nBarnett began his career as a youth player with West Bromwich Albion but left The Hawthorns after not being offered a professional contract. He instead moved to Rushall Olympic, making his Southern League Division One South & West debut at the age of 19 in a 3\u20131 defeat to Clevedon Town. He ended the 2005\u201306 season with nine goals in 41 appearances. His form at the start of the 2006\u201307 season, scoring nine goals in 16 appearances in all competitions, led Northern Premier League Premier Division side AFC Telford United to sign Barnett in October 2006 in exchange for Dean Perrow joining Rushall on a one-month loan deal. However, he fell out of favour at the New Bucks Head and left to join Willenhall Town at the end of the season.\n\nHe spent the 2007\u201308 season with the \"Lockmen\", finishing as the club's top scorer with 22 goals, before he joined Hednesford Town. He scored 28 goals in all competitions during his first season, helping the \"Pitmen\" to win the Birmingham Senior Cup for the first time in 73 years with a 2\u20130 win over Stourbridge. He scored 26 goals in 53 games across the 2009\u201310 campaign to help Hednesford qualify for the Southern Premier Division play-offs.
Before turning professional, he worked for the Halifax bank and British Car Auctions, as a security guard in job centres, and as a delivery driver for a waste management company.\n\nIn May 2010, Barnett moved to League Two side Macclesfield Town, joining former Hednesford teammate Ross Draper at Moss Rose. He made a goalscoring debut on the opening day of the 2010\u201311 season in a 2\u20132 draw at Stevenage. He initially signed a one-year deal with an option for a second; the option was quickly taken up by the \"Silkmen\" after he enjoyed a successful start to his league career. However, manager Gary Simpson admitted that \"if he keeps playing like that we'll struggle to keep hold of him\". Barnett was voted Player of the Year for 2010\u201311 by Macclesfield Town supporters after scoring 13 goals in 51 matches.\n\nBarnett signed for newly promoted League Two side Crawley Town for an undisclosed fee in June 2011. He scored the \"Red Devils\"' first ever Football League goal on 6 August, opening the scoring in a 2\u20132 draw with Port Vale at Vale Park. Barnett and Crawley manager Steve Evans were nominated for the League Two Player of the Month and Manager of the Month awards for August, respectively, but lost to Mark Arber and Andy Scott. Barnett scored his first brace for the club against Bradford City on 16 September, a performance that attracted interest from other clubs. He went on to score a total of 14 goals in 33 games for the club in the first half of the 2011\u201312 season. Blackpool had a bid accepted for Barnett, but the move fell through over personal terms. Despite leaving Crawley Town, Barnett was nominated for the League Two Player of the Year, but lost out to Matt Ritchie. He was also named on the PFA Team of the Year for League Two.\n\nBarnett describes himself as an 'athletic target man' forward, able to use his athleticism to get on the end of a cross and his strength to hold the ball up. In February 2012, Peterborough United manager Darren Ferguson said that Barnett was \"technically very good, has decent pace and he is one of the best strikers in the air that I have seen for a while\".", "doc_id": "8e780664-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Gotthold_Ephraim_Lessing", "document": "Gotthold Ephraim Lessing (22 January 1729 \u2013 15 February 1781) was a German writer, philosopher, dramatist, publicist and art critic, and a representative of the Enlightenment era. His plays and theoretical writings substantially influenced the development of German literature. He is widely considered by theatre historians to be the first dramaturg in his role at Abel Seyler's Hamburg National Theatre.\n\nLessing was born in Kamenz, a small town in Saxony, to Johann Gottfried Lessing and Justine Salome Feller. His father was a Lutheran minister and wrote on theology. Young Lessing studied at the Latin School in Kamenz from 1737 to 1741. As his father wanted his son to follow in his footsteps, Lessing next attended the F\u00fcrstenschule St. Afra in Meissen. After completing his education at St. Afra's, he enrolled at the University of Leipzig, where he pursued a degree in theology, medicine, philosophy, and philology (1746\u20131748).[2]\n\nIt was here that his relationship with Karoline Neuber, a famous German actress, began. He translated several French plays for her, and his interest in theatre grew. During this time, he wrote his first play, The Young Scholar.
Neuber eventually produced the play in 1748.\n\nFrom 1748 to 1760, Lessing lived in Leipzig and Berlin. He began to work as a reviewer and editor for the Vossische Zeitung and other periodicals. Lessing formed a close connection with his cousin, Christlob Mylius, and decided to follow him to Berlin. In 1750, Lessing and Mylius teamed up to begin a periodical publication named Beitr\u00e4ge zur Historie und Aufnahme des Theaters. The publication ran only four issues, but it caught the public's eye and revealed Lessing to be a serious critic and theorist of drama.\n\nIn 1752, he took his master's degree in Wittenberg. From 1760 to 1765, he worked in Breslau (now Wroc\u0142aw) as secretary to General Tauentzien during the Seven Years' War. It was during this time that he wrote his famous Laoco\u00f6n, or the Limitations of Poetry.\n\nIn 1765, Lessing returned to Berlin, leaving in 1767 to work for three years at the Hamburg National Theatre. Actor-manager Konrad Ackermann had begun construction of Germany's first permanent national theatre in Hamburg, and the national theatre enterprise itself was established by Johann Friedrich L\u00f6wen. The owners of the new Hamburg National Theatre hired Lessing as the theatre's critic of plays and acting, an activity later known as dramaturgy (based on his own words), making Lessing the very first dramaturg. The theatre's main backer was Abel Seyler, a former currency speculator who later became known as \"the leading patron of German theatre.\" There he met Eva K\u00f6nig, his future wife. His work in Hamburg formed the basis of his pioneering work on drama, titled Hamburgische Dramaturgie. Because of financial losses due to pirated editions of the Hamburgische Dramaturgie, however, the Hamburg Theatre closed just three years later.\n\nLessing was also famous for his friendship with Jewish-German philosopher Moses Mendelssohn. A recent biography of Mendelssohn's grandson, Felix, describes their friendship as one of the most \"illuminating metaphors for the clarion call of the Enlightenment for religious tolerance\". It was this relationship that sparked his interest in popular religious debates of the time. He began publishing heated pamphlets on his beliefs, which were eventually banned. It was this ban that inspired him to return to the theatre to portray his views and to write Nathan the Wise.\n\nEarly in his life, Lessing showed interest in the theatre. In his theoretical and critical writings on the subject\u2014as in his own plays\u2014he tried to contribute to the development of a new type of theatre in Germany. In doing so, he turned especially against the then-predominant literary theory of Gottsched and his followers. Lessing's Hamburgische Dramaturgie ran critiques of plays that were performed in the Hamburg Theatre, but after dealing with dissatisfied actors and actresses, Lessing redirected his writings towards an analysis of the proper uses of drama. Lessing advocated the principles of drama outlined in Aristotle's Poetics. He believed the French Academy had devalued the uses of drama through its neoclassical rules of form and separation of genres. His repeatedly voiced opinions on this issue influenced the theatre practitioners who began the movement of rejecting theatre rules known as Sturm und Drang (\"Storm and Stress\"). He also supported serious reception of Shakespeare's works. He worked with many theatre groups (e.g. that of the Neuberin).\n\nIn Hamburg he tried with others to set up the German National Theatre.
Today his own works appear as prototypes of the bourgeois German drama that developed later. Scholars see Miss Sara Sampson and Emilia Galotti as amongst the first bourgeois tragedies, Minna von Barnhelm (Minna of Barnhelm) as the model for many classic German comedies, and Nathan the Wise (Nathan der Weise) as the first German drama of ideas (\"Ideendrama\"). His theoretical writings Laoco\u00f6n and Hamburg Dramaturgy (Hamburgische Dramaturgie) set the standards for the discussion of aesthetic and literary theoretical principles. Lessing advocated that dramaturgs should carry out their work in direct collaboration with theatre companies rather than in isolation.\n\nIn his religious and philosophical writings he defended the faithful Christian's right to freedom of thought. He argued against belief in revelation and against the predominant orthodox doctrine's insistence on a literal interpretation of the Bible, through a problem later to be called Lessing's Ditch. Lessing outlined the concept of the religious \"Proof of Power\": How can miracles continue to be used as a basis for Christianity when we have no proof of miracles? Historical truths which are in doubt cannot be used to prove metaphysical truths (such as God's existence). As Lessing puts it: \"That, then, is the ugly great ditch which I cannot cross, however often and however earnestly I have tried to make that leap.\"", "doc_id": "8e7807e0-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/The_Late_Late_Show_(American_talk_show)", "document": "The Late Late Show is an American late-night television talk and variety comedy show on CBS. It first aired in January 1995, with host Tom Snyder, who was followed by Craig Kilborn, Craig Ferguson, and current host James Corden. The show originates from Television City in Los Angeles.\n\nThe show differed from most late-night talk shows during its first two decades on air in that it did not use a house band or an in-studio announcer. The traditional opening monologue also tended to differ from that of other late-night shows, avoiding jokes with punch lines during Snyder's and Ferguson's tenures: Snyder favored a short conversational introduction, while most of Ferguson's years featured a cold opening (a musical parody, audience interaction, a short sketch, or interaction between Ferguson and Geoff Peterson) followed by an anecdotal stream-of-consciousness introduction. While Craig Kilborn opened with a monologue, it tended to be shorter than those of other late shows. Corden's approach to the monologue has been a hybrid of topical punchline jokes and a stream of consciousness, although it is usually very short, as the show tends to favor longer recorded sections.\n\nWhile most late-night talk shows in the United States feature multiple guests individually, James Corden typically has all of his guests on at the same time in a similar fashion to most British talk shows.\n\nTom Snyder hosted the program from its inception in January 1995 until March 1999. The choice of Snyder as host was made by David Letterman, whose contract with CBS gave him (via production company Worldwide Pants) the power to produce the show in the timeslot immediately after his own program. Letterman also had an affinity for Snyder, whose NBC late-night series Tomorrow had been succeeded by Late Night with David Letterman. The time slot on CBS previously carried repeats of Crimetime After Primetime.
Snyder departed CNBC to host the Late Late Show on CBS.\n\nLetterman and Snyder had a long history together: a 1978 Tomorrow episode hosted by Snyder was almost exclusively devoted to a long interview with up-and-coming new comedy talents Letterman, Billy Crystal and Merrill Markoe. In 1982, when Tomorrow was canceled by NBC, Letterman's series Late Night with David Letterman succeeded it in the timeslot; NBC had offered Snyder a move to the slot after Late Night, which he refused.\n\nWhen Snyder announced he was leaving, the show was reformatted to resemble Letterman's and other major late-night talk programs. Craig Kilborn took over in March 1999, having left The Daily Show (where he was succeeded by Jon Stewart) to become the new Late Late Show host; he had previously been an anchor on ESPN's SportsCenter.\n\nWhen Kilborn was on the show, it began with an image of a full moon wavering behind gray stratus clouds, to the sound of an orchestra tuning up, while the announcer\u2014the recorded, modulated voice of Kilborn himself\u2014blurted out, \"From the gorgeous, gorgeous Hollywood Hills in sunny California, it's your Late Late Show with Craig Kilborn. Tonight,\" and then the guests were announced, backed by the show's theme song, composed by Neil Finn. Kilborn was then introduced, \"Ladies and gentlemen, *pause* Mister Craig Kilborn\", to the 1970s disco band Wild Cherry's song \"Play That Funky Music\".\n\nDuring Craig Ferguson's tenure as host, the show started with a cold open, followed by opening credits and a commercial break. A loose comic monologue then followed, consistently including a greeting (\"Welcome to Los Angeles, California, welcome to the Late Late Show, I am your host, TV's Craig Ferguson\") and the proclamation that \"It's a great day for America, everybody!\".\n\nFrom 2010 the monologue also included banter with Geoff Peterson, his \"robot skeleton sidekick\", voiced and controlled by Josh Robert Thompson. This animatronic was constructed by the MythBusters' Grant Imahara but went through many revisions, the most important being the regular live control and voicing by Thompson. This changed the dynamic of the show, as Ferguson had a recurring 'sidekick' to banter with.\n\nOn September 8, 2014, CBS announced that James Corden would succeed Ferguson as host. Corden's show was originally slated to premiere on March 9, 2015, but in December 2014 CBS pushed the premiere back to March 23, 2015, in order to use the NCAA basketball tournament as a means of promoting Corden's debut and to prevent a situation where two episodes would be pre-empted during the first week of the tournament. Corden's hosting tenure is the first to have a house band (the lack thereof having been a running joke during Ferguson's tenure); Reggie Watts serves as the franchise's first bandleader.\n\nIn keeping with customs employed on British chat shows, Corden interviews all of the nightly guests at once, opting for a more conversational style. He also eschews sitting behind the set's desk during the interview portion of the show, using it only for comedy bits and direct addresses to the audience. Corden's version of the show also originates from Studio 56 on a set that includes a bar.
His \"Carpool Karaoke\" segment, in which stars sing their songs in cars, became highly popular online, and clips from the show became widely watched videos.", "doc_id": "8e7808f8-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Republican_Party_of_Virginia", "document": "The party was established in 1854 by opponents of slavery and secession in the commonwealth, with the newly chartered state chapter sending its first delegates, among the more than 600 in attendance, to the 1856 Republican National Convention. However, the Virginia delegates ultimately abstained from casting ballots for president, instead casting ballots for William L. Dayton for vice-president; both candidates were defeated in the general election by Democrats James Buchanan and John C. Breckinridge. Virginia's delegates to the 1860 convention were initially split between a majority for Abraham Lincoln and a minority for Simon Cameron on the first and second ballots, settling on Lincoln for president on the third ballot; they voted for Cassius M. Clay for vice-president on the first and second ballots (Clay was defeated in the convention by Hannibal Hamlin). While John Bell and Edward Everett of the Constitutional Union Party, a party of conservative former Whigs who supported slavery but opposed secession, carried Virginia, Tennessee and Kentucky, Lincoln and Hamlin eventually won the presidential election.\n\nVirginia Republicans were active in fighting for the Union side in the American Civil War, and helped lead the formation of the Restored Government of Virginia as well as the secession of what became the state of West Virginia. Republicans Francis Harrison Pierpont and Daniel Polsley were respectively elected the governor and lieutenant governor of the Restored Government, with Pierpont eventually taking power as the de facto governor of Virginia after the previous Democratic governor William Smith was removed from office and arrested. Two more Republicans would hold the governorship: Henry H. Wells and Gilbert Carlton Walker.\n\nRepublican fortunes turned downward as the Redeemer movement gathered pace and the Reconstruction era ended. A brief upturn occurred when William Mahone formed the Readjuster Party, a bi-racial populist coalition of Democrats and Republicans which was at the height of its power from 1870 to 1883. After the Virginia Constitutional Convention of 1902, which drafted and promulgated a new constitution that disfranchised almost all African-Americans in the commonwealth, the Republican Party ceased to be an effective political party in Virginia.\n\nThe party then reached its nadir, holding only handfuls of seats in either chamber of the General Assembly and in the U.S. House until after 1964. Historically, from the late 19th into the mid-20th centuries, the 9th and 2nd congressional districts were the friendliest terrain for Republicans in the state (and some of the friendliest in the former Confederacy), encompassing areas which border West Virginia. Virginia Republicans managed to help Herbert Hoover and Charles Curtis win the 1928 election, but would only regain their statewide competitiveness after Dwight Eisenhower carried the state in 1952. Linwood Holton would be elected in 1969 as the first Republican governor of Virginia in the 20th century, inaugurating an era of competitive elections between the two major parties.\n\nKate Obenshain Griffin of Winchester became the party's chairman in 2004.
Following Senator George Allen's unsuccessful 2006 reelection bid, Griffin submitted her resignation as Chairman effective November 15, 2006. Her brother, Mark Obenshain, is a State Senator from Harrisonburg in the Virginia General Assembly. Both are the children of the late Richard D. Obenshain.\n\nEd Gillespie was elected as the new Chairman of the RPV on December 2, 2006. He resigned on June 13, 2007, to become counselor to President George W. Bush. Mike Thomas served as interim chairman until July 21, when former Lieutenant Governor of Virginia John H. Hager was elected chairman. On April 9, 2007, the RPV named Fred Malek to serve as the Finance Chairman and Lisa Gable to serve as the Finance Committee Co-Chair.\n\nOn May 31, 2008, Hager was defeated in his bid for re-election at a statewide GOP convention by a strongly conservative member of the House of Delegates, Jeff Frederick of Prince William County. Frederick, who was then 32 years old, was the fifth party chairman in five years. On April 4, 2009, Frederick was removed from the position by RPV's State Central Committee, in a move backed by most of the senior GOP establishment. Many argued that Frederick's election and later removal reflected a war within the party between insiders and outsiders, or between grassroots and establishment Republicans. After his removal, Frederick considered seeking the chairman job again at the party's May 2009 convention, but decided against it. Pat Mullins, who was then the chairman of the Louisa County party unit and formerly the chairman of the Fairfax County party unit, was selected on May 2, 2009, to serve in the interim before a special election at the state party convention later that month. Mullins won the special election at the May 30, 2009, convention, defeating Bill Stanley, the Franklin County chairman. Mullins was re-elected at the party's June 2012 convention. Mullins announced his retirement on November 5, 2014, a day after the Virginia GOP had a strong showing in the 2014 elections. 10th District Republican Committee chairman John Whitbeck was elected on January 24, 2015, by the party's State Central Committee to serve out the remainder of Mullins's term.\n\nAt the party's 2016 state convention, Whitbeck faced a challenge for the chairmanship from Vince Haley, who had unsuccessfully run for the Republican nomination for state senate in the 12th state Senate district in 2015. Haley withdrew his candidacy in early 2016, then tried to re-enter before the convention. At the convention, the party nominations committee ruled that Haley did not qualify to seek the office, and Whitbeck was re-elected unopposed to a full four-year term. Whitbeck resigned from his position on July 21, 2018, due to differences with Corey Stewart, the party's nominee in that year's U.S. Senate race. In September 2018, Jack R. Wilson, the party's 4th Congressional District Chairman since 2007 and a lawyer from Chesterfield County, was elected to fill the balance of Whitbeck's term. The current chairman is former Delegate Rich Anderson, who was elected to a four-year term on August 15, 2020.\n\nPrior to the January 6 joint session of the United States Congress to certify Joe Biden's win, Republican Delegates Dave LaRock (Loudoun), Mark Cole (Fauquier), and Ronnie Campbell (Lexington) sent a letter to Vice President Mike Pence urging him to nullify Virginia's electoral results.
Democratic Speaker of the House Eileen Filler-Corn punished the members by stripping them of their committee assignments.\n\nSen. Amanda Chase, a Republican candidate for Governor in 2021, attended the rally prior to the January 6 storming of the United States Capitol. After the riot that left one person dead, party chairman Rich Anderson said in a statement, \"I and Virginia Republicans across our great Commonwealth condemn these despicable acts without reservation or hesitation.\"\n\nDemocratic Party of Virginia Chairwoman Susan Swecker quickly condemned the Republican officials, saying, \"The Republican Party has made their disdain for democracy clear, and every elected GOP official has been complicit.\"", "doc_id": "8e780a10-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Theralizumab", "document": "Theralizumab (also known as TGN1412, CD28-SuperMAB, and TAB08) is an immunomodulatory drug developed by Thomas H\u00fcnig of the University of W\u00fcrzburg. It was withdrawn from development after inducing severe inflammatory reactions as well as chronic organ failure in the first-in-human study by Parexel in London in March 2006. The developing company, TeGenero Immuno Therapeutics, went bankrupt later that year. The commercial rights were then acquired by a Russian startup, TheraMAB. The drug was renamed TAB08. Phase I and II clinical trials have been completed for arthritis, and clinical trials have been initiated for cancer.\n\nOriginally intended for the treatment of B cell chronic lymphocytic leukemia (B-CLL) and rheumatoid arthritis, TGN1412 is a humanised monoclonal antibody that not only binds to, but is a strong agonist for, the CD28 receptor of the immune system's T cells. CD28 is the co-receptor for the T cell receptor; it binds one of its ligands (of the B7 family) on the interacting partner cell in the reaction.\n\nThe drug, which was designated as an orphan medical product by the European Medicines Agency in March 2005, was developed by TeGenero Immuno Therapeutics, tested by Parexel and manufactured by Boehringer Ingelheim. TeGenero announced the first elucidation of the molecular structure of CD28 almost exactly one year prior to commencement of the TGN1412 phase I clinical trial.\n\nMice of the inbred strain BALB/c were immunized with recombinant human CD28-Fc fusion proteins and boosted with a B lymphoma cell line transfected to express human CD28. Hybridomas were obtained by fusing B cells with the hybridoma partner X63Ag8.653 and screened for reactivity with human CD28 and TCR-independent mitogenic activity. Two monoclonals called 5.11A1 and 9D7 were identified. The more active of the two, 5.11A1, is a mouse IgG1 immunoglobulin.\n\nThe complementarity-determining regions of 5.11A1 were cloned into the framework of human IgG and combined with IgG1 (TGN1112) or IgG4 (TGN1412) constant regions. According to the company's Investigator Brochure, \"TGN1412 is a humanised monoclonal antibody directed against the human CD28 antigen. The molecule was genetically engineered by transfer of the complementarity determining regions (CDRs) from heavy and light chain variable region sequences of a monoclonal mouse anti-human CD28 antibody (5.11A1, Luhder et al., 2003) into human heavy and light chain variable frameworks.
Humanised variable regions were subsequently recombined with a human gene coding for the IgG4 gamma chain and with a human gene coding for a human kappa chain, respectively.\"\n\nThe recombinant genes were transfected into Chinese hamster ovary cells and the recombinant antibody harvested from culture supernatant.\n\nCritics argued that the company should have anticipated that the drug would provoke a severe reaction in humans. An immunologist contacted by New Scientist, who wished to remain anonymous, said, \"You don't need to be a rocket scientist to work out what will happen if you non-specifically activate every T cell in the body.\" While the drug had appeared to be safe in animal models, researchers noted that there were reasons why these may not be indicative of the response in humans, particularly with respect to this type of drug. The BBC reported that \"two of 20 monkeys used in earlier tests suffered an increase in the size of lymph nodes,\" but that \"this information was given to the men and submitted to the test regulators.\" TeGenero said this was transient and was evidence of the extra T cells that the drug produces. Experiments with another drug affecting the CD28 receptor (but to a lesser extent than TGN1412) had also shown side effects in human trials. There have been criticisms that the risks taken and the design of the protocol were insufficiently justified by proper statistical evidence.\n\nCritics of animal testing have cited the case to argue that experiments on nonhuman animals, even in species closely related to humans, are not necessarily predictive of human responses, and cannot justify the harm inflicted on animals or the resultant risks to humans.", "doc_id": "8e780ae2-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Electric_power_industry", "document": "The electric power industry covers the generation, transmission, distribution and sale of electric power to the general public and industry. The commodity sold is actually energy, not power; consumers pay for kilowatt-hours (power multiplied by time), which is energy. The commercial distribution of electricity started in 1882 when electricity was produced for electric lighting. In the 1880s and 1890s, growing economic and safety concerns led to the regulation of the industry. Once an expensive novelty limited to the most densely populated areas, reliable and economical electric power has become an essential aspect of the normal operation of all elements of developed economies.\n\nBy the middle of the 20th century, electricity was seen as a \"natural monopoly\", only efficient if a restricted number of organizations participated in the market; in some areas, vertically-integrated companies provided all stages from generation to retail, and only governmental supervision regulated the rate of return and cost structure.\n\nSince the 1990s, many regions have broken up the generation and distribution of electric power.[citation needed] While such markets can be abusively manipulated, with consequent adverse price and reliability impacts on consumers, competitive production of electrical energy generally leads to worthwhile improvements in efficiency. However, transmission and distribution are harder problems, since returns on investment are not as easy to find.\n\nThe electric power industry is commonly split up into four processes. These are electricity generation (such as at a power station), electric power transmission, electricity distribution and electricity retailing.
In many countries, electric power companies own the whole infrastructure, from generating stations to the transmission and distribution networks. For this reason, electric power is viewed as a natural monopoly. The industry is generally heavily regulated, often with price controls, and is frequently government-owned and operated. However, the modern trend has been growing deregulation in at least the latter two processes.\n\nThe nature and state of electricity market reform often determine whether electric companies are able to be involved in just some of these processes without having to own the entire infrastructure, and whether citizens can choose which parts of the infrastructure to patronise. In countries where electricity provision is deregulated, end-users of electricity may opt for more costly green electricity.\n\nAll forms of electricity generation have positive and negative aspects. Technology will probably eventually determine the most preferred forms, but in a market economy the options with lower overall costs will generally be chosen above other sources. It is not yet clear which form can best meet the necessary energy demands or which process can best serve the demand for electricity. There are indications that renewable energy is rapidly becoming the most viable in economic terms. A diverse mix of generation sources reduces the risks of electricity price spikes.\n\nElectric power transmission is the bulk movement of electrical energy from a generating site, such as a power plant, to an electrical substation. The interconnected lines which facilitate this movement are known as a transmission network. This is distinct from the local wiring between high-voltage substations and customers, which is typically referred to as electric power distribution. The combined transmission and distribution network is known as the \"power grid\" in North America, or just \"the grid\". In the United Kingdom, India, Malaysia and New Zealand, the network is known as the National Grid.\n\nA wide area synchronous grid, also known as an \"interconnection\" in North America, directly connects many generators delivering AC power at the same relative frequency to numerous consumers. For example, there are four major interconnections in North America (the Western Interconnection, the Eastern Interconnection, the Quebec Interconnection and the Electric Reliability Council of Texas (ERCOT) grid). In Europe, a single large grid connects most of the continent.\n\nElectric power distribution is the final stage in the delivery of electric power; it carries electricity from the transmission system to individual consumers. Distribution substations connect to the transmission system and lower the transmission voltage to medium voltage, ranging between 2 kV and 35 kV, with the use of transformers. Primary distribution lines carry this medium-voltage power to distribution transformers located near the customer's premises. Distribution transformers again lower the voltage to the utilization voltage used by lighting, industrial equipment or household appliances. Often several customers are supplied from one transformer through secondary distribution lines. Commercial and residential customers are connected to the secondary distribution lines through service drops.
Customers demanding a much larger amount of power may be connected directly to the primary distribution level or the subtransmission level.", "doc_id": "8e780bd2-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/RAF_Coningsby", "document": "Royal Air Force Coningsby or RAF Coningsby (IATA: QCY, ICAO: EGXC) is a Royal Air Force (RAF) station located 13.7 kilometres (8.5 mi) south-west of Horncastle, and 15.8 kilometres (9.8 mi) north-west of Boston, in the East Lindsey district of Lincolnshire, England. It is a Main Operating Base of the RAF and home to three front-line Eurofighter Typhoon FGR4 units, No. 3 Squadron, No. 11 Squadron and No. 12 Squadron. In support of front-line units, No. 29 Squadron is the Typhoon Operational Conversion Unit and No. 41 Squadron is the Typhoon Operational Evaluation Unit. Coningsby is also the home of the Battle of Britain Memorial Flight (BBMF), which operates a variety of historic RAF aircraft.\n\nPlans for an airfield at Coningsby began in 1937 as part of the RAF's expansion plan. However, progress in the compulsory purchase of the land was slow and delayed the start of work for two years. The station opened during the Second World War on 4 November 1940 under No. 5 Group, part of RAF Bomber Command. The first flying unit, No. 106 Squadron with the Handley Page Hampden medium bomber, arrived in February 1941, with active operations taking place the following month when four Hampdens bombed Cologne in Germany. The squadron was joined in April 1941 by No. 97 Squadron equipped with Avro Manchester medium bombers. In May 1942, aircraft from Coningsby participated in the 'Thousand Bomber' raid on Cologne.\n\nThe original grass runways were found to be unsuitable for heavy bomber operations, so the station was closed for nearly a year between September 1942 and August 1943, whilst paved runways were laid in preparation for accommodating such aircraft. At the same time further hangars were constructed.\n\nThe first unit to return was the now-famous No. 617 'Dambusters' Squadron. Equipped with Avro Lancaster heavy bombers, the squadron was stationed at Coningsby from August 1943. Due to its specialist nature, the Dambusters carried out limited operations whilst at Coningsby, with the most notable being Operation Garlic, a failed raid targeting the Dortmund-Ems canal in Germany, when five out of the eight Lancasters on the mission failed to return home. As the squadron required more space, it moved to nearby RAF Woodhall Spa in January 1944, swapping places with another Lancaster unit, No. 619 Squadron, which itself later moved on to RAF Dunholme Lodge.\n\nFurther Lancaster squadrons were based at Coningsby during the final months of the war, including No. 61 Squadron from RAF Skellingthorpe, No. 83 Squadron and No. 97 Squadron.\n\nFollowing the Second World War, Coningsby was home to the Mosquito-equipped No. 109 Squadron and No. 139 Squadron, then became part of No. 3 Group, with Boeing Washington aircraft from 1950. On 17 August 1953, 52-year-old Air Vice-Marshal William Brook, the Air Officer Commanding of No. 3 Group, took off from the base in a Gloster Meteor and crashed into a Dutch barn at Bradley, Staffordshire.\n\nThe TSR2's intended replacement\u2014the American General Dynamics F-111 Aardvark\u2014was shelved on 16 January 1968 when its costs overshot the UK's budget (it would have cost \u00a3425m for 50 aircraft).
The TSR2 had large development costs, whereas the F-111 (also known as Tactical Fighter Experimental, or TFX) could be bought off the shelf. Coningsby was planned to get the F-111K, the RAF version of the F-111; also, in the 1966 Defence White Paper, it was intended that the Anglo-French AFVG, later the UKVG, would replace the TSR2 (it eventually did, as the Panavia Tornado). 50 F-111Ks were planned with 100 AFVGs (to enter service by 1970); Denis Healey claimed the F-111s and AFVGs would be cheaper than the TSR2 programme (158 aircraft) by \u00a3700m. As Minister of Aviation throughout 1965, the Labour MP Roy Jenkins had also wanted to cancel the Olympus-powered Concorde, but the 1962 Anglo-French treaty imposed prohibitively steep financial penalties for cancellation; the Hawker Siddeley P.1154 and HS.681 were cancelled at the same time.\n\nAFVGs were also planned to replace the Buccaneer in the Royal Navy\u2014Tornados were never flown by the Royal Navy, as the carriers for them, the CVA-01s, were cancelled. But the Royal Navy did operate fourteen Phantoms on HMS Ark Royal until the new smaller carriers entered service\u201448 Phantoms had been designated for the Fleet Air Arm, with twenty of these going to RAF Leuchars, and Ark Royal's Phantoms also moved to Leuchars in 1978. HMS Eagle was never converted to Phantom use as the conversion was deemed too expensive, and the carrier was scrapped in January 1972, with its Sea Vixen aircraft. Another alternative to the TSR2 considered by the Labour government in July 1965 was to order Rolls-Royce Spey-engined French Mirage IV aircraft, to be known as the Mirage IVS; it would have had avionics from the TSR2 and been partly made by BAC at Warton.\n\nWith the running down of RAF Coltishall in Norfolk, No. 6 Squadron relocated with their SEPECAT Jaguars to Coningsby on 1 April 2006, where it was planned they would operate until October 2007. However, on 25 April 2007 it was announced by the Ministry of Defence that the Jaguars would be withdrawn from service on 30 April. May 2007 saw No. 6 Squadron flying their Jaguars to RAF Cosford, where they would be utilised by No. 1 SoTT. No. 6 Squadron disbanded on 31 May 2007. Deliveries continued in June and July, with the last Jaguar to arrive at Cosford from Coningsby being XX119 on 2 July 2007.\n\nConingsby was the first airfield to receive the Phantom and the Tornado ADV, and was also the first to receive the latter's replacement, the Eurofighter Typhoon. The Typhoon arrived in May 2005 with No. 17 Squadron, after the RAF first publicly displayed the aircraft at Coningsby in December 2004. No. 3(F) Squadron moved to RAF Coningsby, where it became the first operational front-line RAF Typhoon squadron in July 2007, and No. 11(F) Squadron became operational there shortly thereafter.\n\nNo. 12 Squadron reactivated in July 2018 and is temporarily integrating Qatari Emiri Air Force air and ground crews in order to provide training and support as part of the Qatari purchase of twenty-four Typhoons from the UK.", "doc_id": "8e780cfe-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Atlantis_(Aquaman)", "document": "Atlantis (sometimes called the Kingdom of Atlantis or the Atlantean Empire) is an aquatic civilization appearing in American comic books published by DC Comics, commonly associated with the superhero Aquaman. It is one of the numerous depictions of Atlantis within DC Comics and is perhaps the most recognizable of them.
This version of the city first appeared in Adventure Comics #260 (May 1959), and was created by Robert Bernstein and Ramona Fradon.\n\nAn aquatic nation whose people are human-like creatures with varying levels of biological aquatic adaptations, Atlantis is considered within the DC Universe to be one of the first and oldest civilizations in the history of the Earth, as well as one of the most powerful. In most continuities and stories, Atlantis is a hereditary monarchy that was founded by a powerful race of magic users known as the Homo magi (sometimes referred to as ancient Atlanteans or Atlanteans) and, over the course of its history, became an epicenter of magic and science alike. Eventually, the nation would sink to the bottom of the ocean and its people would adapt and evolve into the modern, aquatic Atlanteans. Over time, the nation's history would be marked by conflict over its succession of rulers, its status as a superpower, its fictional cultural heritage, and its relationship with the wider world in the modern age.\n\nThe Kingdom of Atlantis made its cinematic debut in the 2017 film Justice League, set in the DC Extended Universe, and was later more prominently featured in the 2018 film Aquaman.\n\nThe continent of Atlantis was settled 65,000,000 years ago by a humanoid extraterrestrial race known as the Hunter/Gatherers, who proceeded to hunt the animals to extinction. One million years ago, Atlantean society flourished alongside Homo erectus, the precursors of modern man. This apparently occurred long before the genetic tampering associated with the Metagene.\n\nThousands of years ago, magic levels on Earth began to drop due to the sleeping entity known as Darkworld beginning to awaken. The Atlantean sorceress Citrina struck a deal with the Lords of Chaos who ruled Gemworld, so that she would be allowed to create a home there for those Homo magi and magic-dependent species, such as the Faerie, Elves, Centaurs, and so forth, who wished to emigrate from Earth. Gemworld was colonized by Homo magi emigrants from Earth made up of the 12 ruling houses of Atlantis.\n\nDarkworld was a dimension formed by the body of an unnamed cosmic entity who later fell into a deep sleep. This entity's dreams were responsible for creating the first Lords of Chaos and Order: Chaon (chaos), Gemimn (order), and Tynan the Balancer. These beings and others were worshiped as gods by the citizens of Atlantis. Darkworld was tethered to Atlantis by a massive \"chain\" created by Deedra, goddess of nature. Some Atlantean magicians such as Arion and Garn Daanuth later learned to tap the mystic energies of Darkworld, enabling them to wield nearly godlike power.\n\nEventually, Atlantis came to be the center of early human civilization. Its king, Orin, ordered the construction of a protective dome over the city simply as a defense against barbarian tribes, but shortly afterward a meteor crashed into the earth, destroying most of the upper world and sinking the city to the bottom of the ocean. Orin's brother, Shalako, departed with a number of followers through tunnels in order to reclaim another sunken city of their empire, Tritonis, whose inhabitants had not survived. After a few years, Atlantean scientists developed a serum that would permanently let their people breathe underwater; as a consequence of the magic used by Shalako in settling Tritonis, the Tritonians were further mutated to have fish-tails instead of legs.
Some descendants of Shalako's son Dardanus also inherited his telepathy, a trait marked by blond hair, which was extremely rare among Atlanteans. Dardanus's son Kordax further had the ability to command sea creatures. After Kordax led these creatures alongside the Tritonians in a revolution against the king, he was exiled, and children born with blond hair, the \"mark of Kordax\", were generally viewed as aberrations and abandoned to die.\n\nIn the DC Universe, the Homo magi originated on the lost continent of Atlantis. The continent was a focal point for unharnessed magical energies (wild magic), and the local Homo sapiens evolved into Homo magi as a result of their exposure to these energies. Upon the fall of Atlantis, people who carried the predisposition for magic were scattered to the four winds. Today, every human being capable of casting spells is a descendant of the Atlantean Homo magi.", "doc_id": "8e7811ae-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Historicity_of_the_Homeric_epics", "document": "The extent of the historical basis of the Homeric epics has been a topic of scholarly debate for centuries. While researchers of the 18th century had largely rejected the story of the Trojan War as fable, the discoveries made by Heinrich Schliemann at Hisarlik reopened the question in modern terms, and the subsequent excavation of Troy VIIa and the discovery of the toponym \"Wilusa\" in Hittite correspondence has made it plausible that the Trojan War cycle was at least remotely based on a historical conflict of the 12th century BC, even if the poems of Homer are removed from the event by more than four centuries of oral tradition.\n\nIn Ancient Greece, the Trojan War was generally regarded as a real event, though the particular details of the story were considered up for debate. For instance, Herodotus argued that Homer had exaggerated the story and that the Trojans had been unable to return Helen because she was in fact in Egypt. When sixth-century Athenians cited Homer to justify their side in a territorial dispute with Megara, the Megarans responded by accusing the Athenians of falsifying the text.\n\nThe Trojan War continued to be regarded as essentially historical during the Roman empire, even after its Christianization. In the time of Strabo, topographic writings discussed the identity of sites mentioned by Homer. Eusebius of Caesarea's influential Chronologia gave Troy the same historical weight as Abraham in his universal history of humankind. Jerome's Chronicon followed Eusebius, and all the medieval chroniclers began with summaries of the universal history of Jerome.\n\nMedieval Europeans continued to accept the Trojan War as historical, often claiming descent from Trojan heroes. Geoffrey of Monmouth's pseudo-genealogy traced a Trojan origin for royal Britons in Historia Regum Britanniae, and Fredegar gave a similar origin myth for the Merovingians in which they were descended from a legendary King Francio, who had built a new Troy at Treves.\n\nIn the 1870s, Heinrich Schliemann reopened the question with his archaeological excavations at Hisarlik. This site had been previously identified as Classical Ilion, and thus as the location where the ancients had believed the mythic war to have occurred. Underneath the classical city, Schliemann found the remains of numerous earlier settlements, one of which he declared to be that of the mythic city.
Subsequent excavations have shown that this city was in fact a millennium too early to have coexisted with Mycenaean palaces.\n\nSince Schliemann, the site has been further excavated and reappraised numerous times, with particular attention to the layers which did coexist with the Mycenaeans, known collectively as Late Bronze Age Troy. Additional lines of research have included excavations at other sites such as Mycenae, potential references to Troy in Hittite records, as well as philological study of the Iliad and the Odyssey themselves. Despite these achievements, there remains no consensus for or against a real Trojan War, and some scholars regard the truth as unknowable.\n\nThe more that is known about Bronze Age history, the clearer it becomes that it is not a yes-or-no question but one of educated assessment of how much historical knowledge is present in Homer, and whether it represents a retrospective memory of Dark Age Greece, as Finley concludes, or of Mycenaean Greece, which is the dominant view of A Companion to Homer, A.J.B. Wace and F.H. Stubbings, eds. (New York/London: Macmillan, 1962). The particular narrative of the Iliad is not an account of the war, but a tale of the psychology, the wrath, vengeance and death of individual heroes, which assumes common knowledge of the Trojan War as a back-story. No scholars now assume that the individual events of the tale (many of which involve divine intervention) are historical fact; at the same time, no scholars claim that the story is entirely devoid of memories of Mycenaean times.\n\nHowever, in addressing a separate controversy, Oxford Professor of Greek Martin L. West indicated that such an approach \"misconceives\" the problem, and that Troy probably fell to a much smaller group of attackers in a much shorter time.\n\nAnother opinion is that Homer was heir to an unbroken tradition of oral epic poetry reaching back some 500 years into Mycenaean times. The case is set out in The Singer of Tales by Albert B. Lord, citing earlier work by folklorist and mythographer Milman Parry. In this view, the poem's core could represent a historical campaign that took place on the eve of the decline of the Mycenaean era. Much legendary material may have been added, but in this view it is meaningful to ask for archaeological and textual evidence corresponding to events referred to in the Iliad. Such a historical background would explain the geographical knowledge of Hisarl\u0131k and the surrounding area, which could alternatively have been obtained, in Homer's time, by visiting the site. Some verses of the Iliad have been argued to predate Homer's time, and could conceivably date back to the Mycenaean era. Such verses only fit the poem's meter if certain words are pronounced with a /w/ sound, which had vanished from most dialects of Greece by the 7th century BC.", "doc_id": "8e78130c-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Dextromethorphan/quinidine", "document": "Dextromethorphan/quinidine, sold under the brand name Nuedexta, is a fixed-dose combination medication for the treatment of pseudobulbar affect (PBA). It contains dextromethorphan (DXM) and the class I antiarrhythmic agent quinidine.\n\nDextromethorphan/quinidine was approved for medical use in the United States in October 2010, and is marketed by Avanir Pharmaceuticals.\n\nDXM/quinidine is used in the treatment of PBA. 
In a 12-week randomized, double-blind trial, patients with amyotrophic lateral sclerosis or multiple sclerosis and significant PBA were given either Nuedexta 20/10 mg or placebo. Among the 326 randomized patients, the daily rate of PBA episodes was 46.9% lower for Nuedexta than for placebo (p < 0.0001). The three deaths in each of the two drug treatment arms and the single death in the placebo arm of the study were believed to be due to the natural course of the disease.\n\nDextromethorphan acts as a \u03c31 receptor agonist, serotonin\u2013norepinephrine reuptake inhibitor, and NMDA receptor antagonist, while quinidine is an antiarrhythmic agent acting as a CYP2D6 inhibitor. Quinidine prevents the metabolism of dextromethorphan into its active metabolite dextrorphan, which is a much more potent NMDA receptor antagonist but much less potent serotonin reuptake inhibitor than dextromethorphan. The mechanism of action of dextromethorphan/quinidine in the treatment of PBA is unknown.\n\nDextromethorphan/quinidine was investigated for the treatment of agitation associated with dementia, diabetic neuropathy, drug-induced dyskinesia, migraine, and neuropathic pain, but development for these indications was discontinued. Another formulation, deudextromethorphan/quinidine, is still under investigation for various indications. These include agitation, schizophrenia, and major depressive disorder, among others.", "doc_id": "8e7813b6-42e9-11ed-a0a2-3e22fbbc18d6"} {"website": "https://en.wikipedia.org/wiki/Totonac_languages", "document": "Totonac is a Totonacan language cluster of Mexico, spoken across a number of central Mexican states by the Totonac people. It is a Mesoamerican language and shows many of the traits which define the Mesoamerican Linguistic Area. Along with some 62 other indigenous languages, it is recognised as an official language of Mexico, though as a single language.\n\nThe Totonac people are an indigenous group native to Totonacapan along the Gulf of Mexico. The Gulf of Mexico stretches from the Texan border to the Yucat\u00e1n Peninsula. It includes the greatest topographic diversity in the country and contains a great variety of ecozones as well as microhabitats. The Totonac people share their territory with the Nahua, Otom\u00ed, and Tepehua (not to be confused with the Tepehuano language), all of which have communities within the region. Totonacapan is located in east central Mexico between present-day Puebla and Veracruz. The Totonac people have migrated to various cities such as Veracruz, Puebla, and Mexico City. Totonac populations are also found in colonized regions of Uxpanapa in southern Veracruz and the state of Quintana Roo in the eastern part of the Yucat\u00e1n Peninsula. The Totonac inhabit two different types of environments: cool and rainy mesas of high altitude, and warm and humid coastal lowlands.\n\nSome sources claim that the term Totonac, as explained by residents, means \"people that come from where the sun rises.\" Other explanations of the term are derogatory, indicating little capacity or ability to understand. Still other interpretations hold that toto translates to \"three\" while naco translates to \"heart\" (Spanish coraz\u00f3n), giving totonaco the overall meaning of \"three hearts\".\n\nMorphology in Filomeno Mata Totonac includes inflection, derivation, and compounding. 
Adjectives in this language show reduplication, a pattern that recurs throughout the language. Speakers generally prefer verbal expressions in everyday speech: \"instead of \u2018visitors\u2019, tiintamim\u00e1ana \u2018those who are coming\u2019; instead of \u2018seamstresses\u2019, tiintsapanan\u00e1h \u2018those who sew\u2019.\" Filomeno Mata Totonac is a verb-centric language, though it includes non-verbal elements as well, and it marks both subject and object on the verb. Nouns in this language show a variety of morphological structures. Pronouns make no gender distinctions: \"Only one set of personal pronouns exists which may be used for subjects or objects.\" Speakers vary in how they form the first person pronoun, using i-, a-, or e-. There is also disagreement among speakers over the third person pronouns 'uu' and 'tsam\u00e1', since each can be used in different ways in a sentence.\n\nRegarding possession in Filomeno Mata Totonac, nouns can be inflected for possession; adjectives, however, cannot be. Kinship terms always carry possessive markers, and \"body part nouns and nouns referring to items of clothing are also almost always possessed.\" The possessive prefixes are kin- for first person, min- for second person, and \u0161- for third person; the suffix -kan is added to the noun when a plural possessor is involved.\n\nPossessive affixation ordinarily does not affect stress, except in the noun for 'house', \u0109ik\u1ecb. With this word, the stress shifts to the prefix in the first or second person singular, or the word takes the plural suffix, which always carries the stress, yielding kin\u0109ikk\u00e1n \u2018our house\u2019 and min\u0109ikk\u00e1n \u2018your house\u2019. \"When the noun referring to the possessor appears with the possessed noun, the order is POSSESSED-POSSESSOR, with the first noun affixed with possessive marker(s).\" This is consistent with the VSO word order of the language, though it overrides adjective-noun word order in this case. When plural nouns are possessed, the possessive affixes occur outside of the plural morphemes; examples are given in the following table.\n\nNumber formation in Filomeno Mata Totonac is also distinctive: the number roots from 11\u201319 are composed roughly of a \u2018ten\u2019 prefix and the numerals from 1\u20139, and the numerals up to twenty are prefixed by the general numeral classifier \u2019aq-, which is also used for spherical objects. The table below shows only the numbers from 1\u201320.", "doc_id": "8e781514-42e9-11ed-a0a2-3e22fbbc18d6"} {"source": "NCERT XII History, India", "document": "There were several archaeological cultures in the region prior to the Mature Harappan. These cultures were associated with distinctive pottery, evidence of agriculture and pastoralism, and some crafts. Settlements were generally small, and there were virtually no large buildings. It appears that there was a break between the Early Harappan and the Harappan civilisation, evident from large-scale burning at some sites, as well as the abandonment of certain settlements. 
Archaeologists generally use certain strategies to find out whether there were social or economic differences amongst people living within a particular culture. These include studying burials. You are probably familiar with the massive pyramids of Egypt, some of which were contemporaneous with the Harappan civilisation. Many of these pyramids were royal burials, where enormous quantities of wealth were buried. At burials in Harappan sites the dead were generally laid in pits. Sometimes, there were differences in the way the burial pit was made \u2013 in some instances, the hollowed-out spaces were lined with bricks. Could these variations be an indication of social differences? We are not sure. Some graves contain pottery and ornaments, perhaps indicating a belief that these could be used in the afterlife. Jewellery has been found in burials of both men and women. In fact, in the excavations at the cemetery in Harappa in the mid-1980s, an ornament consisting of three shell rings, a jasper (a kind of semi-precious stone) bead and hundreds of micro beads was found near the skull of a male. In some instances the dead were buried with copper mirrors. But on the whole, it appears that the Harappans did not believe in burying precious things with the dead. Another strategy to identify social differences is to study artefacts, which archaeologists broadly classify as utilitarian and luxuries. The first category includes objects of daily use made fairly easily out of ordinary materials such as stone or clay. These include querns, pottery, needles, flesh-rubbers (body scrubbers), etc., and are usually found distributed throughout settlements. Archaeologists assume objects were luxuries if they are rare or made from costly, non-local materials or with complicated technologies. Thus, little pots of faience (a material made of ground sand or silica mixed with colour and a gum and then fired) were probably considered precious because they were difficult to make. The situation becomes more complicated when we find what seem to be articles of daily use, such as spindle whorls made of rare materials such as faience. Do we classify these as utilitarian or luxuries? If we study the distribution of such artefacts, we find that rare objects made of valuable materials are generally concentrated in large settlements like Mohenjodaro and Harappa and are rarely found in the smaller settlements. For example, miniature pots of faience, perhaps used as perfume bottles, are found mostly in Mohenjodaro and Harappa, and there are none from small settlements like Kalibangan. Gold too was rare, and as at present, probably precious \u2013 all the gold jewellery found at Harappan sites was recovered from hoards. The variety of materials used to make beads is remarkable: stones like carnelian (of a beautiful red colour), jasper, crystal, quartz and steatite; metals like copper, bronze and gold; and shell, faience and terracotta or burnt clay. Some beads were made of two or more stones, cemented together, some of stone with gold caps. The shapes were numerous \u2013 disc-shaped, cylindrical, spherical, barrel-shaped, segmented. Some were decorated by incising or painting, and some had designs etched onto them. Perhaps the most striking feature of the Harappan civilisation was the development of urban centres. Let us look at one such centre, Mohenjodaro, more closely. Although Mohenjodaro is the most well-known site, the first site to be discovered was Harappa. 
The settlement is divided into two sections, one smaller but higher and the other much larger but lower. Archaeologists designate these as the Citadel and the Lower Town respectively. The Citadel owes its height to the fact that buildings were constructed on mud brick platforms. It was walled, which meant that it was physically separated from the Lower Town. The Lower Town was also walled. Several buildings were built on platforms, which served as foundations. It has been calculated that if one labourer moved roughly a cubic metre of earth daily, just to put the foundations in place it would have required four million person-days, in other words, mobilising labour on a very large scale. Consider something else. Once the platforms were in place, all building activity within the city was restricted to a fixed area on the platforms. So it seems that the settlement was first planned and then implemented accordingly. Other signs of planning include bricks, which, whether sun-dried or baked, were of a standardised ratio, where the length and breadth were four times and twice the height respectively. Such bricks were used at all Harappan settlements. One of the most distinctive features of Harappan cities was the carefully planned drainage system. If you look at the plan of the Lower Town you will notice that roads and streets were laid out along an approximate \u201cgrid\u201d pattern, intersecting at right angles. It seems that streets with drains were laid out first and then houses built along them. If domestic waste water had to flow into the street drains, every house needed to have at least one wall along a street.", "doc_id": "f57a254e-492a-11ed-9a68-0242ac110007"} {"source": "NCERT XII History, India", "document": "The variety of materials used to make beads is remarkable: stones like carnelian (of a beautiful red colour), jasper, crystal, quartz and steatite; metals like copper, bronze and gold; and shell, faience and terracotta or burnt clay. Some beads were made of two or more stones, cemented together, some of stone with gold caps. The shapes were numerous \u2013 disc-shaped, cylindrical, spherical, barrel-shaped, segmented. Some were decorated by incising or painting, and some had designs etched onto them.\n\nTechniques for making beads differed according to the material. Steatite, a very soft stone, was easily worked. Some beads were moulded out of a paste made with steatite powder. This permitted making a variety of shapes, unlike the geometrical forms made out of harder stones. How the steatite micro bead was made remains a puzzle for archaeologists studying ancient technology.\n\nArchaeologists\u2019 experiments have revealed that the red colour of carnelian was obtained by firing the yellowish raw material and beads at various stages of production. Nodules were chipped into rough shapes, and then finely flaked into the final form. Grinding, polishing and drilling completed the process. Specialised drills have been found at Chanhudaro, Lothal and more recently at Dholavira. If you locate Nageshwar and Balakot on Map 1, you will notice that both settlements are near the coast. These were specialised centres for making shell objects \u2013 including bangles, ladles and inlay \u2013 which were taken to other settlements. Similarly, it is likely that finished products (such as beads) from Chanhudaro and Lothal were taken to the large urban centres such as Mohenjodaro and Harappa.\n\nIn order to identify centres of craft production, archaeologists usually look for the following: raw material such as stone nodules, whole shells, copper ore; tools; unfinished objects; rejects and waste material. In fact, waste is one of the best indicators of craft work. For instance, if shell or stone is cut to make objects, then pieces of these materials will be discarded as waste at the place of production.\n\nSometimes, larger waste pieces were used up to make smaller objects, but minuscule bits were usually left in the work area. These traces suggest that apart from small, specialised centres, craft production was also undertaken in large cities such as Mohenjodaro and Harappa.\n\nAs is obvious, a variety of materials was used for craft production. While some such as clay were locally available, many such as stone, timber and metal had to be procured from outside the alluvial plain. Terracotta toy models of bullock carts suggest that this was one important means of transporting goods and people across land routes. Riverine routes along the Indus and its tributaries, as well as coastal routes were also probably used.\n\nThe Harappans procured materials for craft production in various ways. For instance, they established settlements such as Nageshwar and Balakot in areas where shell was available. 
Other such sites were Shortughai, in far-off Afghanistan, near the best source of lapis lazuli, a blue stone that was apparently very highly valued, and Lothal which was near sources of carnelian (from Bharuch in Gujarat), steatite (from south Rajasthan and north Gujarat) and metal (from Rajasthan).\n\nAnother strategy for procuring raw materials may have been to send expeditions to areas such as the Khetri region of Rajasthan (for copper) and south India (for gold). These expeditions established communication with local communities. Occasional finds of Harappan artefacts such as steatite micro beads in these areas are indications of such contact. There is evidence in the Khetri area for what archaeologists call the Ganeshwar-Jodhpura culture, with its distinctive non-Harappan pottery and an unusual wealth of copper objects. It is possible that the inhabitants of this region supplied copper to the Harappans.\n\nRecent archaeological finds suggest that copper was also probably brought from Oman, on the south-eastern tip of the Arabian peninsula. Chemical analyses have shown that both the Omani copper and Harappan artefacts have traces of nickel, suggesting a common origin. There are other traces of contact as well. A distinctive type of vessel, a large Harappan jar coated with a thick layer of black clay, has been found at Omani sites. Such thick coatings prevent the percolation of liquids. We do not know what was carried in these vessels, but it is possible that the Harappans exchanged the contents of these vessels for Omani copper.\n\nMesopotamian texts datable to the third millennium BCE refer to copper coming from a region called Magan, perhaps a name for Oman, and interestingly enough copper found at Mesopotamian sites also contains traces of nickel. Other archaeological finds suggestive of long-distance contacts include Harappan seals, weights, dice and beads. In this context, it is worth noting that Mesopotamian texts mention contact with regions named Dilmun (probably the island of Bahrain), Magan and Meluhha, possibly the Harappan region. They mention the products from Meluhha: carnelian, lapis lazuli, copper, gold, and varieties of wood. A Mesopotamian myth says of Meluhha: \u201cMay your bird be the haja-bird, may its call be heard in the royal palace.\u201d Some archaeologists think the haja-bird was the peacock. Did it get this name from its call? It is likely that communication with Oman, Bahrain or Mesopotamia was by sea. Mesopotamian texts refer to Meluhha as a land of seafarers. Besides, we find depictions of ships and boats on seals.", "doc_id": "d0bd73ee-492c-11ed-ba9c-0242ac110007"} {"source": "NCERT XII History, India", "document": "Let us retrace our steps back to the urban centres that emerged in several parts of the subcontinent from c. sixth century BCE. As we have seen, many of these were capitals of mahajanapadas. Virtually all major towns were located along routes of communication. Some such as Pataliputra were on riverine routes. Others, such as Ujjayini, were along land routes, and yet others, such as Puhar, were near the coast, from where sea routes began. Many cities like Mathura were bustling centres of commercial, cultural and political activity.\n\nWe have seen that kings and ruling elites lived in fortified cities. Although it is difficult to conduct extensive excavations at most sites because people live in these areas even today (unlike the Harappan cities), a wide range of artefacts have been recovered from them. These include fine pottery bowls and dishes, with a glossy finish, known as Northern Black Polished Ware, probably used by rich people, and ornaments, tools, weapons, vessels, figurines, made of a wide range of materials \u2013 gold, silver, copper, bronze, ivory, glass, shell and terracotta.\n\nBy the second century BCE, we find short votive inscriptions in a number of cities. These mention the name of the donor, and sometimes specify his/her occupation as well. They tell us about people who lived in towns: washing folk, weavers, scribes, carpenters, potters, goldsmiths, blacksmiths, officials, religious teachers, merchants and kings. \n\nSometimes, guilds or shrenis, organisations of craft producers and merchants, are mentioned as well. These guilds probably procured raw materials, regulated production, and marketed the finished product. It is likely that craftspersons used a range of iron tools to meet the growing demands of urban elites.\n\nWhen historians began reconstructing early Indian history in the nineteenth century, the emergence of the Mauryan Empire was regarded as a major landmark. India was then under colonial rule, and was part of the British empire. 
Nineteenth and early twentieth century Indian historians found the possibility that there was an empire in early India both challenging and exciting. Also, some of the archaeological finds associated with the Mauryas, including stone sculpture, were considered to be examples of the spectacular art typical of empires. Many of these historians found the message on Asokan inscriptions very different from that of most other rulers, suggesting that Asoka was more powerful and industrious, as also more humble than later rulers who adopted grandiose titles. So it is not surprising that nationalist leaders in the twentieth century regarded him as an inspiring figure.\n\nYet, how important was the Mauryan Empire? It lasted for about 150 years, which is not a very long time in the vast span of the history of the subcontinent. Besides, if you look at Map 2, you will notice that the empire did not encompass the entire subcontinent. And even within the frontiers of the empire, control was not uniform. By the second century BCE, new chiefdoms and kingdoms emerged in several parts of the subcontinent.\n\nThe new kingdoms that emerged in the Deccan and further south, including the chiefdoms of the Cholas, Cheras and Pandyas in Tamilakam (the name of the ancient Tamil country, which included parts of present-day Andhra Pradesh and Kerala, in addition to Tamil Nadu), proved to be stable and prosperous.\n\nWe know about these states from a variety of sources. For instance, the early Tamil Sangam texts (see also Chapter 3) contain poems describing chiefs and the ways in which they acquired and distributed resources. \n\nMany chiefs and kings, including the Satavahanas who ruled over parts of western and central India (c. second century BCE-second century CE) and the Shakas, a people of Central Asian origin who established kingdoms in the north-western and western parts of the subcontinent, derived revenues from long-distance trade. Their social origins were often obscure, but, as we will see in the case of the Satavahanas (Chapter 3), once they acquired power they attempted to claim social status in a variety of ways.\n\nOne means of claiming high status was to identify with a variety of deities. This strategy is best exemplified by the Kushanas (c. first century BCE-first century CE), who ruled over a vast kingdom extending from Central Asia to northwest India. Their history has been reconstructed from inscriptions and textual traditions. The notions of kingship they wished to project are perhaps best evidenced in their coins and sculpture.\n\nColossal statues of Kushana rulers have been found installed in a shrine at Mat near Mathura (Uttar Pradesh). Similar statues have been found in a shrine in Afghanistan as well. Some historians feel this indicates that the Kushanas considered themselves godlike. Many Kushana rulers also adopted the title devaputra, or \u201cson of god\u201d, possibly inspired by Chinese rulers who called themselves sons of heaven.\n\nBy the fourth century there is evidence of larger states, including the Gupta Empire. Many of these depended on samantas, men who maintained themselves through local resources including control over land. They offered homage and provided military support to rulers. 
Powerful samantas could become kings; conversely, weak rulers might find themselves being reduced to positions of subordination.\n\nHistories of the Gupta rulers have been reconstructed from literature, coins and inscriptions, including prashastis, composed in praise of kings in particular, and patrons in general, by poets. While historians often attempt to draw factual information from such compositions, those who composed and read them often treasured them as works of poetry rather than as accounts that were literally true. The Prayaga Prashasti (also known as the Allahabad Pillar Inscription) composed in Sanskrit by Harishena, the court poet of Samudragupta, arguably the most powerful of the Gupta rulers (c. fourth century CE), is a case in point.", "doc_id": "4cba8a94-4933-11ed-96d3-0242ac110007"} {"source": "NCERT XII History, India", "document": "The ruins at Hampi were brought to light in 1800 by an engineer and antiquarian named Colonel Colin Mackenzie. An employee of the English East India Company, he prepared the first survey map of the site. Much of the initial information he received was based on the memories of priests of the Virupaksha temple and the shrine of Pampadevi. Subsequently, from 1856, photographers began to record the monuments which enabled scholars to study them. As early as 1836 epigraphists began collecting several dozen inscriptions found at this and other temples at Hampi. In an effort to reconstruct the history of the city and the empire, historians collated information from these sources with accounts of foreign travellers and other literature written in Telugu, Kannada, Tamil and Sanskrit.\n\nAccording to tradition and epigraphic evidence two brothers, Harihara and Bukka, founded the Vijayanagara Empire in 1336. This empire included within its fluctuating frontiers peoples who spoke different languages and followed different religious traditions.\n\nOn their northern frontier, the Vijayanagara kings competed with contemporary rulers \u2013 including the Sultans of the Deccan and the Gajapati rulers of Orissa \u2013 for control of the fertile river valleys and the resources generated by lucrative overseas trade. At the same time, interaction between these states led to sharing of ideas, especially in the field of architecture. The rulers of Vijayanagara borrowed concepts and building techniques which they then developed further.\n\nSome of the areas that were incorporated within the empire had witnessed the development of powerful states such as those of the Cholas in Tamil Nadu and the Hoysalas in Karnataka. Ruling elites in these areas had extended patronage to elaborate temples such as the Brihadishvara temple at Thanjavur and the Chennakeshava temple at Belur. The rulers of Vijayanagara, who called themselves rayas, built on these traditions and carried them, as we will see, literally to new heights.\n\nThe most striking feature about the location of Vijayanagara is the natural basin formed by the river Tungabhadra which flows in a north-easterly direction. The surrounding landscape is characterised by stunning granite hills that seem to form a girdle around the city. A number of streams flow down to the river from these rocky outcrops.\n\nIn almost all cases embankments were built along these streams to create reservoirs of varying sizes. As this is one of the most arid zones of the peninsula, elaborate arrangements had to be made to store rainwater and conduct it to the city. The most important such tank was built in the early years of the fifteenth century and is now called Kamalapuram tank. Water from this tank not only irrigated fields nearby but was also conducted through a channel to the \u201croyal centre\u201d.\n\nOne of the most prominent waterworks to be seen among the ruins is the Hiriya canal. This canal drew water from a dam across the Tungabhadra and irrigated the cultivated valley that separated the \u201csacred centre\u201d from the \u201curban core\u201d. This was apparently built by kings of the Sangama dynasty.", "doc_id": "6058bfca-4939-11ed-a2aa-0242ac110007"} {"source": "NCERT XII History, India", "document": "Chronicles commissioned by the Mughal emperors are an important source for studying the empire and its court. They were written in order to project a vision of an enlightened kingdom to all those who came under its umbrella. At the same time they were meant to convey to those who resisted the rule of the Mughals that all resistance was destined to fail. 
Also, the rulers wanted to ensure that there was an account of their rule for posterity.\n\nThe authors of Mughal chronicles were invariably courtiers. The histories they wrote focused on events centred on the ruler, his family, the court and nobles, wars and administrative arrangements. Their titles, such as the Akbar Nama, Shahjahan Nama, Alamgir Nama, that is, the story of Akbar, Shah Jahan and Alamgir (a title of the Mughal ruler Aurangzeb), suggest that in the eyes of their authors the history of the empire and the court was synonymous with that of the emperor.\n\nMughal court chronicles were written in Persian. Under the Sultans of Delhi, Persian flourished as a language of the court and of literary writings, alongside north Indian languages, especially Hindavi and its regional variants. As the Mughals were Chaghtai Turks by origin, Turkish was their mother tongue. Their first ruler Babur wrote poetry and his memoirs in this language.\n\nIt was Akbar who consciously set out to make Persian the leading language of the Mughal court. Cultural and intellectual contacts with Iran, as well as a regular stream of Iranian and Central Asian migrants seeking positions at the Mughal court, might have motivated the emperor to adopt the language. Persian was elevated to a language of empire, conferring power and prestige on those who had a command of it. It was spoken by the king, the royal household and the elite at court. Further, it became the language of administration at all levels so that accountants, clerks and other functionaries also learnt it.\n\nEven when Persian was not directly used, its vocabulary and idiom heavily influenced the language of official records in Rajasthani and Marathi and even Tamil. Since the people using Persian in the sixteenth and seventeenth centuries came from many different regions of the subcontinent and spoke other Indian languages, Persian too became Indianised by absorbing local idioms. A new language, Urdu, sprang from the interaction of Persian with Hindavi. \n\nMughal chronicles such as the Akbar Nama were written in Persian; others, like Babur\u2019s memoirs, were translated from Turkish into Persian as the Babur Nama. Translations of Sanskrit texts such as the Mahabharata and the Ramayana into Persian were commissioned by the Mughal emperors. The Mahabharata was translated as the Razmnama (Book of Wars).\n\nAll books in Mughal India were manuscripts, that is, they were handwritten. The centre of manuscript production was the imperial kitabkhana. Although kitabkhana can be translated as library, it was a scriptorium, that is, a place where the emperor\u2019s collection of manuscripts was kept and new manuscripts were produced.\n\nThe creation of a manuscript involved a number of people performing a variety of tasks. Paper makers were needed to prepare the folios of the manuscript, scribes or calligraphers to copy the text, gilders to illuminate the pages, painters to illustrate scenes from the text, bookbinders to gather the individual folios and set them within ornamental covers. The finished manuscript was seen as a precious object, a work of intellectual wealth and beauty. It exemplified the power of its patron, the Mughal emperor, to bring such beauty into being.\n\nAt the same time some of the people involved in the actual production of the manuscript also got recognition in the form of titles and awards. Of these, calligraphers and painters held a high social standing while others, such as paper makers or bookbinders, have remained anonymous artisans. 
\n\nCalligraphy, the art of handwriting, was considered a skill of great importance. It was practised using different styles. Akbar\u2019s favourite was the nastaliq, a fluid style with long horizontal strokes. It is written using a piece of trimmed reed with a tip of 5 to 10 mm called qalam, dipped in carbon ink (siyahi). The nib of the qalam is usually split in the middle to facilitate the absorption of ink.", "doc_id": "3208fb22-493d-11ed-a6fc-0242ac110007"} {"source": "NCERT XII History, India", "document": "As you know, colonial rule was first established in Bengal. It is here that the earliest attempts were made to reorder rural society and establish a new regime of land rights and a new revenue system. Let us see what happened in Bengal in the early years of Company (E.I.C.) rule.\n\nThe Company had recognised the zamindars as important, but it wanted to control and regulate them, subdue their authority and restrict their autonomy. The zamindars\u2019 troops were disbanded, customs duties abolished, and their \u201ccutcheries\u201d (courts) brought under the supervision of a Collector appointed by the Company. Zamindars lost their power to organise local justice and the local police. Over time the collectorate emerged as an alternative centre of authority, severely restricting what the zamindar could do. In one case, when a raja failed to pay the revenue, a Company official was speedily dispatched to his zamindari with explicit instructions \u201cto take charge of the District and to use the most effectual means to destroy all the influence and the authority of the raja and his officers\u201d.\n\nAt the time of rent collection, an officer of the zamindar, usually the amlah, came around to the village. But rent collection was a perennial problem. Sometimes bad harvests and low prices made payment of dues difficult for the ryots. At other times ryots deliberately delayed payment. Rich ryots and village headmen \u2013 jotedars and mandals \u2013 were only too happy to see the zamindar in trouble. The zamindar could therefore not easily assert his power over them. Zamindars could prosecute defaulters, but the judicial process was long drawn out. In Burdwan alone there were over 30,000 pending suits for arrears of rent payment in 1798.\n\nWhile many zamindars were facing a crisis at the end of the eighteenth century, a group of rich peasants were consolidating their position in the villages. In Francis Buchanan\u2019s survey of the Dinajpur district in North Bengal we have a vivid description of this class of rich peasants known as jotedars. By the early nineteenth century, jotedars had acquired vast areas of land \u2013 sometimes as much as several thousand acres. They controlled local trade as well as moneylending, exercising immense power over the poorer cultivators of the region. A large part of their land was cultivated through sharecroppers (adhiyars or bargadars) who brought their own ploughs, laboured in the field, and handed over half the produce to the jotedars after the harvest.\nWithin the villages, the power of jotedars was more effective than that of zamindars. Unlike zamindars who often lived in urban areas, jotedars were located in the villages and exercised direct control over a considerable section of poor villagers. They fiercely resisted efforts by zamindars to increase the jama of the village, prevented zamindari officials from executing their duties, mobilised ryots who were dependent on them, and deliberately delayed payments of revenue to the zamindar. 
In fact, when the estates of the zamindars were auctioned for failure to make revenue payment, jotedars were often amongst the purchasers.\n\nThe jotedars were most powerful in North Bengal, although rich peasants and village headmen were emerging as commanding figures in the countryside in other parts of Bengal as well. In some places they were called haoladars, elsewhere they were known as gantidars or mandals. Their rise inevitably weakened zamindari authority.\n\nThe life of the Paharias \u2013 as hunters, shifting cultivators, food gatherers, charcoal producers, silkworm rearers \u2013 was thus intimately connected to the forest. They lived in hutments within tamarind groves, and rested in the shade of mango trees. They considered the entire region as their land, the basis of their identity as well as survival; and they resisted the intrusion of outsiders. Their chiefs maintained the unity of the group, settled disputes, and led the tribe in battles with other tribes and plainspeople.\n\nWith their base in the hills, the Paharias regularly raided the plains where settled agriculturists lived. These raids were necessary for survival, particularly in years of scarcity; they were a way of asserting power over settled communities; and they were a means of negotiating political relations with outsiders. The zamindars on the plains often had to purchase peace by paying a regular tribute to the hill chiefs. Traders similarly gave a small amount to the hill folk for permission to use the passes controlled by them. Once the toll was paid, the Paharia chiefs protected the traders, ensuring that their goods were not plundered by anyone.", "doc_id": "486faa2a-4940-11ed-8059-0242ac110007"} {"source": "NCERT XII History, India", "document": "Within the villages, the power of jotedars was more effective than that of zamindars. Unlike zamindars who often lived in urban areas, jotedars were located in the villages and exercised direct control over a considerable section of poor villagers. They fiercely resisted efforts by zamindars to increase the jama of the village, prevented zamindari officials from executing their duties, mobilised ryots who were dependent on them, and deliberately delayed payments of revenue to the zamindar. In fact, when the estates of the zamindars were auctioned for failure to make revenue payment, jotedars were often amongst the purchasers.\n\nThe jotedars were most powerful in North Bengal, although rich peasants and village headmen were emerging as commanding figures in the countryside in other parts of Bengal as well. In some places they were called haoladars, elsewhere they were known as gantidars or mandals. Their rise inevitably weakened zamindari authority.\n\nThe life of the Paharias \u2013 as hunters, shifting cultivators, food gatherers, charcoal producers, silkworm rearers \u2013 was thus intimately connected to the forest. They lived in hutments within tamarind groves, and rested in the shade of mango trees. They considered the entire region as their land, the basis of their identity as well as survival; and they resisted the intrusion of outsiders. Their chiefs maintained the unity of the group, settled disputes, and led the tribe in battles with other tribes and plainspeople.\n\nWith their base in the hills, the Paharias regularly raided the plains where settled agriculturists lived. 
These raids were necessary for survival, particularly in years of scarcity; they were a way of asserting power over settled communities; and they were a means of negotiating political relations with outsiders. The zamindars on the plains often had to purchase peace by paying a regular tribute to the hill chiefs. Traders similarly gave a small amount to the hill folk for permission to use the passes controlled by them. Once the toll was paid, the Paharia chiefs protected the traders, ensuring that their goods were not plundered by anyone.\n\nSanthal myths and songs of the nineteenth century refer very frequently to a long history of travel: they represent the Santhal past as one of continuous mobility, a tireless search for a place to settle. Here in the Damin-i-Koh their journey seemed to have come to an end.\n\nWhen the Santhals settled on the peripheries of the Rajmahal hills, the Paharias resisted but were ultimately forced to withdraw deeper into the hills. Restricted from moving down to the lower hills and valleys, they were confined to the dry interior and to the more barren and rocky upper hills. This severely affected their lives, impoverishing them in the long term. Shifting agriculture depended on the ability to move to newer and newer land and on utilising the natural fertility of the soil. When the most fertile soils became inaccessible to them, being part of the Damin, the Paharias could not effectively sustain their mode of cultivation. When the forests of the region were cleared for cultivation the hunters amongst them also faced problems. The Santhals, by contrast, gave up their earlier life of mobility and settled down, cultivating a range of commercial crops for the market, and dealing with traders and moneylenders.\n\nThe Santhals, however, soon found that the land they had brought under cultivation was slipping away from their hands. The state was levying heavy taxes on the land that the Santhals had cleared, moneylenders (dikus) were charging them high rates of interest and taking over the land when debts remained unpaid, and zamindars were asserting control over the Damin area.\n\nBy the 1850s, the Santhals felt that the time had come to rebel against zamindars, moneylenders and the colonial state, in order to create an ideal world for themselves where they would rule. It was after the Santhal Revolt (1855-56) that the Santhal Pargana was created, carving out 5,500 square miles from the districts of Bhagalpur and Birbhum. The colonial state hoped that by creating a new territory for the Santhals and imposing some special laws within it, the Santhals could be conciliated.", "doc_id": "b502a4bc-4940-11ed-aac6-0242ac110007"} {"source": "NCERT XII History, India", "document": "By the eighteenth century Madras, Calcutta and Bombay had become important ports. The settlements that came up here were convenient points for collecting goods. The English East India Company built its factories (i.e., mercantile offices) there and because of competition among the European companies, fortified these settlements for protection. In Madras, Fort St George, in Calcutta Fort William and in Bombay the Fort marked out the areas of British settlement. Indian merchants, artisans and other workers who had economic dealings with European merchants lived outside these forts in settlements of their own. 
Thus, from the beginning there were separate quarters for Europeans and Indians, which came to be labelled in contemporary writings as the \u201cWhite Town\u201d and \u201cBlack Town\u201d respectively. Once the British captured political power these racial distinctions became sharper.\n\nFrom the mid-nineteenth century the expanding network of railways linked these cities to the rest of the country. As a result the hinterland \u2013 the countryside from where raw materials and labour were drawn \u2013 became more closely linked to these port cities. Since raw material was transported to these cities for export and there was plentiful cheap labour available, it was convenient to set up modern factories there. After the 1850s, cotton mills were set up by Indian merchants and entrepreneurs in Bombay, and European-owned jute mills were established on the outskirts of Calcutta. This was the beginning of modern industrial development in India.\n\nAlthough Calcutta, Bombay and Madras supplied raw materials for industry in England, and had emerged because of modern economic forces like capitalism, their economies were not primarily based on factory production. The majority of the working population in these cities belonged to what economists classify as the tertiary sector. There were only two proper \u201cindustrial cities\u201d: Kanpur, specialising in leather, woollen and cotton textiles, and Jamshedpur, specialising in steel. India never became a modern industrialised country, since discriminatory colonial policies limited the levels of industrial development. Calcutta, Bombay and Madras grew into large cities, but this did not signify any dramatic economic growth for colonial India as a whole.\n\nColonial cities reflected the mercantile culture of the new rulers. Political power and patronage shifted from Indian rulers to the merchants of the East India Company. Indians who worked as interpreters, middlemen, traders and suppliers of goods also had an important place in these new cities. Economic activity near the river or the sea led to the development of docks and ghats. Along the shore were godowns, mercantile offices, insurance agencies for shipping, transport depots, banking establishments. Further inland were the chief administrative offices of the Company. The Writers\u2019 Building in Calcutta was one such office. Around the periphery of the Fort, European merchants and agents built palatial houses in European styles. Some built garden houses in the suburbs. Racially exclusive clubs, racecourses and theatres were also built for the ruling elite.\n\nThe rich Indian agents and middlemen built large traditional courtyard houses in the Black Town in the vicinity of the bazaars. They bought up large tracts of land in the city as future investment. To impress their English masters they threw lavish parties during festivals. They also built temples to establish their status in society. The labouring poor provided a variety of services to their European and Indian masters as cooks, palanquin bearers, coachmen, guards, porters and construction and dock workers. They lived in makeshift huts in different parts of the city.\n\nThe nature of the colonial city changed further in the mid-nineteenth century. After the Revolt of 1857 British attitudes in India were shaped by a constant fear of rebellion. They felt that towns needed to be better defended, and white people had to live in more secure and segregated enclaves, away from the threat of the \u201cnatives\u201d. 
Pasturelands and agricultural fields around the older towns were cleared, and new urban spaces called \u201cCivil Lines\u201d were set up. White people began to live in the Civil Lines. Cantonments \u2013 places where Indian troops under European command were stationed \u2013 were also developed as safe enclaves. These areas were separate from but attached to the Indian towns. With broad streets, bungalows set amidst large gardens, barracks, parade ground and church, they were meant as a safe haven for Europeans as well as a model of ordered urban life in contrast to the densely built-up Indian towns.\n\nFor the British, the \u201cBlack\u201d areas came to symbolise not only chaos and anarchy, but also filth and disease. For a long while the British were interested primarily in the cleanliness and hygiene of the \u201cWhite\u201d areas. But as epidemics of cholera and plague spread, killing thousands, colonial officials felt the need for more stringent measures of sanitation and public health. They feared that disease would spread from the \u201cBlack\u201d to the \u201cWhite\u201d areas. From the 1860s and 1870s, stringent administrative measures regarding sanitation were implemented and building activity in the Indian towns was regulated. Underground piped water supply and sewerage and drainage systems were also put in place around this time. Sanitary vigilance thus became another way of regulating Indian towns.", "doc_id": "a613c760-494f-11ed-b11c-0242ac110007"} {"source": "NCERT XII History, India", "document": "By the eighteenth century Madras, Calcutta and Bombay had become important ports. The settlements that came up here were convenient points for collecting goods. The English East India Company built its factories (i.e., mercantile offices) there and because of competition among the European companies, fortified these settlements for protection. In Madras, Fort St George, in Calcutta Fort William and in Bombay the Fort marked out the areas of British settlement. Indian merchants, artisans and other workers who had economic dealings with European merchants lived outside these forts in settlements of their own. Thus, from the beginning there were separate quarters for Europeans and Indians, which came to be labelled in contemporary writings as the \u201cWhite Town\u201d and \u201cBlack Town\u201d respectively. Once the British captured political power these racial distinctions became sharper.\n\nFrom the mid-nineteenth century the expanding network of railways linked these cities to the rest of the country. As a result the hinterland \u2013 the countryside from where raw materials and labour were drawn \u2013 became more closely linked to these port cities. Since raw material was transported to these cities for export and there was plentiful cheap labour available, it was convenient to set up modern factories there. After the 1850s, cotton mills were set up by Indian merchants and entrepreneurs in Bombay, and European-owned jute mills were established on the outskirts of Calcutta. This was the beginning of modern industrial development in India.\n\nAlthough Calcutta, Bombay and Madras supplied raw materials for industry in England, and had emerged because of modern economic forces like capitalism, their economies were not primarily based on factory production. The majority of the working population in these cities belonged to what economists classify as the tertiary sector. 
There were only two proper \u201cindustrial cities\u201d: Kanpur, specialising in leather, woollen and cotton textiles, and Jamshedpur, specialising in steel. India never became a modern industrialised country, since discriminatory colonial policies limited the levels of industrial development. Calcutta, Bombay and Madras grew into large cities, but this did not signify any dramatic economic growth for colonial India as a whole.\n\nColonial cities reflected the mercantile culture of the new rulers. Political power and patronage shifted from Indian rulers to the merchants of the East India Company. Indians who worked as interpreters, middlemen, traders and suppliers of goods also had an important place in these new cities. Economic activity near the river or the sea led to the development of docks and ghats. Along the shore were godowns, mercantile offices, insurance agencies for shipping, transport depots, banking establishments. Further inland were the chief administrative offices of the Company. The Writers\u2019 Building in Calcutta was one such office. Around the periphery of the Fort, European merchants and agents built palatial houses in European styles. Some built garden houses in the suburbs. Racially exclusive clubs, racecourses and theatres were also built for the ruling elite.\n\nThe rich Indian agents and middlemen built large traditional courtyard houses in the Black Town in the vicinity of the bazaars. They bought up large tracts of land in the city as future investment. To impress their English masters they threw lavish parties during festivals. They also built temples to establish their status in society. The labouring poor provided a variety of services to their European and Indian masters as cooks, palanquin bearers, coachmen, guards, porters and construction and dock workers. They lived in makeshift huts in different parts of the city.\n\nThe nature of the colonial city changed further in the mid-nineteenth century. After the Revolt of 1857 British attitudes in India were shaped by a constant fear of rebellion. They felt that towns needed to be better defended, and white people had to live in more secure and segregated enclaves, away from the threat of the \u201cnatives\u201d. Pasturelands and agricultural fields around the older towns were cleared, and new urban spaces called \u201cCivil Lines\u201d were set up. White people began to live in the Civil Lines. Cantonments \u2013 places where Indian troops under European command were stationed \u2013 were also developed as safe enclaves. These areas were separate from but attached to the Indian towns. With broad streets, bungalows set amidst large gardens, barracks, parade ground and church, they were meant as a safe haven for Europeans as well as a model of ordered urban life in contrast to the densely built-up Indian towns.\n\nFor the British, the \u201cBlack\u201d areas came to symbolise not only chaos and anarchy, but also filth and disease. For a long while the British were interested primarily in the cleanliness and hygiene of the \u201cWhite\u201d areas. But as epidemics of cholera and plague spread, killing thousands, colonial officials felt the need for more stringent measures of sanitation and public health. They feared that disease would spread from the \u201cBlack\u201d to the \u201cWhite\u201d areas. From the 1860s and 1870s, stringent administrative measures regarding sanitation were implemented and building activity in the Indian towns was regulated. 
Underground piped water supply and sewerage and drainage systems were also put in place around this time. Sanitary vigilance thus became another way of regulating Indian towns.", "doc_id": "b59962ee-494f-11ed-a996-0242ac110007"} {"source": "NCERT XII History, India", "document": "By the late nineteenth century, official intervention in the city became more stringent. Gone were the days when town planning was seen as a task to be shared by inhabitants and the government. Instead, the government took over all the initiatives for town planning including funding. This opportunity was used to clear more huts and develop the British portions of the town at the expense of other areas. The existing racial divide of the \u201cWhite Town\u201d and \u201cBlack Town\u201d was reinforced by the new divide of \u201chealthy\u201d and \u201cunhealthy\u201d. Indian representatives in the municipality protested against this unfair bias towards the development of the European parts of the town. Public protests against these government policies strengthened the feeling of anti-colonialism and nationalism among Indians.\n\nIf one way of realising this imperial vision was through town planning, the other was through embellishing cities with monumental buildings. Buildings in cities could include forts, government offices, educational institutions, religious structures, commemorative towers, commercial depots, or even docks and bridges. Although primarily serving functional needs like defence, administration and commerce these were rarely simple structures. They were often meant to represent ideas such as imperial power, nationalism and religious glory. Let us see how this is exemplified in the case of Bombay.\n\nBombay was initially seven islands. As the population grew, the islands were joined to create more space and they gradually fused into one big city. Bombay was the commercial capital of colonial India. As the premier port on the western coast it was the centre of international trade. By the end of the nineteenth century, half the imports and exports of India passed through Bombay. One important item of this trade was opium that the East India Company exported to China. Indian merchants and middlemen supplied and participated in this trade and they helped integrate Bombay\u2019s economy directly to Malwa, Rajasthan and Sind where opium was grown. This collaboration with the Company was profitable and led to the growth of an Indian capitalist class. Bombay\u2019s capitalists came from diverse communities such as Parsi, Marwari, Konkani Muslim, Gujarati Bania, Bohra, Jew and Armenian.\n\nAs you have read (Chapter 10), when the American Civil War started in 1861 cotton from the American South stopped coming into the international market. This led to an upsurge of demand for Indian cotton, grown primarily in the Deccan. Once again Indian merchants and middlemen found an opportunity for earning huge profits. In 1869 the Suez Canal was opened and this further strengthened Bombay\u2019s links with the world economy. The Bombay government and Indian merchants used this opportunity to declare Bombay Urbs Prima in Indis, a Latin phrase meaning the most important city of India. By the late nineteenth century Indian merchants in Bombay were investing their wealth in new ventures such as cotton mills. They also patronised building activity in the city.\n\nAs Bombay\u2019s economy grew, from the mid-nineteenth century there was a need to expand railways and shipping and develop the administrative structure. 
Many new buildings were constructed at this time. These buildings reflected the culture and confidence of the rulers. The architectural style was usually European. This importation of European styles reflected the imperial vision in several ways. First, it expressed the British desire to create a familiar landscape in an alien country, and thus to feel at home in the colony. Second, the British felt that European styles would best symbolise their superiority, authority and power. Third, they thought that buildings that looked European would mark out the difference and distance between the colonial masters and their Indian subjects.", "doc_id": "297789ba-4951-11ed-9415-0242ac110007"} {"source": "NCERT XII History, India", "document": "By 1922, Gandhiji had transformed Indian nationalism, thereby redeeming the promise he made in his BHU speech of February 1916. It was no longer a movement of professionals and intellectuals; now, hundreds of thousands of peasants, workers and artisans also participated in it. Many of them venerated Gandhiji, referring to him as their \u201cMahatma\u201d. They appreciated the fact that he dressed like them, lived like them, and spoke their language. Unlike other leaders he did not stand apart from the common folk, but empathised and even identified with them.\n\nThis identification was strikingly reflected in his dress: while other nationalist leaders dressed formally, wearing a Western suit or an Indian bandgala, Gandhiji went among the people in a simple dhoti or loincloth. Meanwhile, he spent part of each day working on the charkha (spinning wheel), and encouraged other nationalists to do likewise. The act of spinning allowed Gandhiji to break the boundaries that prevailed within the traditional caste system, between mental labour and manual labour.\n\nIn a fascinating study, the historian Shahid Amin has traced the image of Mahatma Gandhi among the peasants of eastern Uttar Pradesh, as conveyed by reports and rumours in the local press. When he travelled through the region in February 1921, Gandhiji was received by adoring crowds everywhere.\n\nWherever Gandhiji went, rumours spread of his miraculous powers. In some places it was said that he had been sent by the King to redress the grievances of the farmers, and that he had the power to overrule all local officials. In other places it was claimed that Gandhiji\u2019s power was superior to that of the English monarch, and that with his arrival the colonial rulers would flee the district. There were also stories reporting dire consequences for those who opposed him; rumours spread of how villagers who criticised Gandhiji found their houses mysteriously falling apart or their crops failing.\n\nKnown variously as \u201cGandhi baba\u201d, \u201cGandhi Maharaj\u201d, or simply as \u201cMahatma\u201d, Gandhiji appeared to the Indian peasant as a saviour, who would rescue them from high taxes and oppressive officials and restore dignity and autonomy to their lives. Gandhiji\u2019s appeal among the poor, and peasants in particular, was enhanced by his ascetic lifestyle, and by his shrewd use of symbols such as the dhoti and the charkha. Mahatma Gandhi was by caste a merchant, and by profession a lawyer; but his simple lifestyle and love of working with his hands allowed him to empathise more fully with the labouring poor and for them, in turn, to empathise with him. 
Where most other politicians talked down to them, Gandhiji appeared not just to look like them, but to understand them and relate to their lives.\n\nWhile Mahatma Gandhi\u2019s mass appeal was undoubtedly genuine \u2013 and in the context of Indian politics, without precedent \u2013 it must also be stressed that his success in broadening the basis of nationalism was based on careful organisation. New branches of the Congress were set up in various parts of India. A series of \u201cPraja Mandals\u201d were established to promote the nationalist creed in the princely states. Gandhiji encouraged the communication of the nationalist message in the mother tongue, rather than in the language of the rulers, English. Thus the provincial committees of the Congress were based on linguistic regions, rather than on the artificial boundaries of British India. In these different ways nationalism was taken to the farthest corners of the country and embraced by social groups previously untouched by it.", "doc_id": "99bf5444-4953-11ed-a20e-0242ac110007"} {"source": "NCERT XII History, India", "document": "By 1922, Gandhiji had transformed Indian nationalism, thereby redeeming the promise he made in his BHU speech of February 1916. It was no longer a movement of professionals and intellectuals; now, hundreds of thousands of peasants, workers and artisans also participated in it. Many of them venerated Gandhiji, referring to him as their \u201cMahatma\u201d. They appreciated the fact that he dressed like them, lived like them, and spoke their language. Unlike other leaders he did not stand apart from the common folk, but empathised and even identified with them.\n\nThis identification was strikingly reflected in his dress: while other nationalist leaders dressed formally, wearing a Western suit or an Indian bandgala, Gandhiji went among the people in a simple dhoti or loincloth. Meanwhile, he spent part of each day working on the charkha (spinning wheel), and encouraged other nationalists to do likewise. The act of spinning allowed Gandhiji to break the boundaries that prevailed within the traditional caste system, between mental labour and manual labour.\n\nIn a fascinating study, the historian Shahid Amin has traced the image of Mahatma Gandhi among the peasants of eastern Uttar Pradesh, as conveyed by reports and rumours in the local press. When he travelled through the region in February 1921, Gandhiji was received by adoring crowds everywhere.\n\nWherever Gandhiji went, rumours spread of his miraculous powers. In some places it was said that he had been sent by the King to redress the grievances of the farmers, and that he had the power to overrule all local officials. In other places it was claimed that Gandhiji\u2019s power was superior to that of the English monarch, and that with his arrival the colonial rulers would flee the district. There were also stories reporting dire consequences for those who opposed him; rumours spread of how villagers who criticised Gandhiji found their houses mysteriously falling apart or their crops failing.\n\nKnown variously as \u201cGandhi baba\u201d, \u201cGandhi Maharaj\u201d, or simply as \u201cMahatma\u201d, Gandhiji appeared to the Indian peasant as a saviour, who would rescue them from high taxes and oppressive officials and restore dignity and autonomy to their lives. Gandhiji\u2019s appeal among the poor, and peasants in particular, was enhanced by his ascetic lifestyle, and by his shrewd use of symbols such as the dhoti and the charkha. 
Mahatma Gandhi was by caste a merchant, and by profession a lawyer; but his simple lifestyle and love of working with his hands allowed him to empathise more fully with the labouring poor and for them, in turn, to empathise with him. Where most other politicians talked down to them, Gandhiji appeared not just to look like them, but to understand them and relate to their lives.\n\nWhile Mahatma Gandhi\u2019s mass appeal was undoubtedly genuine \u2013 and in the context of Indian politics, without precedent \u2013 it must also be stressed that his success in broadening the basis of nationalism was based on careful organisation. New branches of the Congress were set up in various parts of India. A series of \u201cPraja Mandals\u201d were established to promote the nationalist creed in the princely states. Gandhiji encouraged the communication of the nationalist message in the mother tongue, rather than in the language of the rulers, English. Thus the provincial committees of the Congress were based on linguistic regions, rather than on the artificial boundaries of British India. In these different ways nationalism was taken to the farthest corners of the country and embraced by social groups previously untouched by it.", "doc_id": "142a35c8-4954-11ed-b0da-0242ac110007"} {"source": "NCERT XII History, India", "document": "For several years after the Non-cooperation Movement ended, Mahatma Gandhi focused on his social reform work. In 1928, however, he began to think of re-entering politics. That year there was an all-India campaign in opposition to the all-White Simon Commission, sent from England to enquire into conditions in the colony. Gandhiji did not himself participate in this movement, although he gave it his blessings, as he also did to a peasant satyagraha in Bardoli in the same year.\n\nAt the end of December 1929, the Congress held its annual session in the city of Lahore. The meeting was significant for two things: the election of Jawaharlal Nehru as President, signifying the passing of the baton of leadership to the younger generation; and the proclamation of commitment to \u201cPurna Swaraj\u201d, or complete independence. Now the pace of politics picked up once more. On 26 January 1930, \u201cIndependence Day\u201d was observed, with the national flag being hoisted in different venues, and patriotic songs being sung. Gandhiji himself issued precise instructions as to how the day should be observed. \u201cIt would be good,\u201d he said, \u201cif the declaration [of Independence] is made by whole villages, whole cities even. It would be well if all the meetings were held at the identical minute in all the places.\u201d\n\nGandhiji suggested that the time of the meeting be advertised in the traditional way, by the beating of drums. The celebrations would begin with the hoisting of the national flag. The rest of the day would be spent \u201cin doing some constructive work, whether it is spinning, or service of \u2018untouchables\u2019, or reunion of Hindus and Mussalmans, or prohibition work, or even all these together, which is not impossible\u201d. 
Participants would take a pledge affirming that it was \u201cthe inalienable right of the Indian people, as of any other people, to have freedom and to enjoy the fruits of their toil\u201d, and that \u201cif any government deprives a people of these rights and oppresses them, the people have a further right to alter it or abolish it\u201d.\n\nSoon after the observance of this \u201cIndependence Day\u201d, Mahatma Gandhi announced that he would lead a march to break one of the most widely disliked laws in British India, which gave the state a monopoly in the manufacture and sale of salt. His targeting of the salt monopoly was another illustration of Gandhiji\u2019s tactical wisdom. For in every Indian household, salt was indispensable; yet people were forbidden from making salt even for domestic use, compelling them to buy it from shops at a high price. The state monopoly over salt was deeply unpopular; by making it his target, Gandhiji hoped to mobilise a wider discontent against British rule.\n\nWhere most Indians understood the significance of Gandhiji\u2019s challenge, the British Raj apparently did not. Although Gandhiji had given advance notice of his \u201cSalt March\u201d to the Viceroy Lord Irwin, Irwin failed to grasp the significance of the action. On 12 March 1930, Gandhiji began walking from his ashram at Sabarmati towards the ocean. He reached his destination three weeks later, making a fistful of salt as he did so, and thereby making himself a criminal in the eyes of the law. Meanwhile, parallel salt marches were being conducted in other parts of the country.", "doc_id": "3095cfae-4956-11ed-8abf-0242ac110007"} {"source": "NCERT XII History, India", "document": "The Congress ministries also contributed to the widening rift. In the United Provinces, the party had rejected the Muslim League proposal for a coalition government partly because the League tended to support landlordism, which the Congress wished to abolish, although the party had not yet taken any concrete steps in that direction. Nor did the Congress achieve any substantial gains in the \u201cMuslim mass contact\u201d programme it launched. In the end, the secular and radical rhetoric of the Congress merely alarmed conservative Muslims and the Muslim landed elite, without winning over the Muslim masses.\n\nMoreover, while the leading Congress leaders in the late 1930s insisted more than ever before on the need for secularism, these ideas were by no means universally shared lower down in the party hierarchy, or even by all Congress ministers. Maulana Azad, an important Congress leader, pointed out in 1937 that members of the Congress were not allowed to join the League, yet Congressmen were active in the Hindu Mahasabha \u2013 at least in the Central Provinces (present-day Madhya Pradesh). Only in December 1938 did the Congress Working Committee declare that Congress members could not be members of the Mahasabha. Incidentally, this was also the period when the Hindu Mahasabha and the Rashtriya Swayamsevak Sangh (RSS) were gaining strength. The latter spread from its Nagpur base to the United Provinces, the Punjab, and other parts of the country in the 1930s. By 1940, the RSS had over 100,000 trained and highly disciplined cadres pledged to an ideology of Hindu nationalism, convinced that India was a land of the Hindus.\n\nThe Pakistan demand was formalised gradually. On 23 March 1940, the League moved a resolution demanding a measure of autonomy for the Muslim-majority areas of the subcontinent. 
This ambiguous resolution never mentioned partition or Pakistan. In fact, Sikandar Hayat Khan, Punjab Premier and leader of the Unionist Party, who had drafted the resolution, declared in a Punjab assembly speech on 1 March 1941 that he was opposed to a Pakistan that would mean \u201cMuslim Raj here and Hindu Raj elsewhere ... If Pakistan means unalloyed Muslim Raj in the Punjab then I will have nothing to do with it.\u201d He reiterated his plea for a loose (united) confederation with considerable autonomy for the confederating units.\n\nThe origins of the Pakistan demand have also been traced back to the Urdu poet Mohammad Iqbal, the writer of \u201cSare Jahan Se Achha Hindustan Hamara\u201d. In his presidential address to the Muslim League in 1930, the poet spoke of a need for a \u201cNorth-West Indian Muslim state\u201d. Iqbal, however, was not visualising the emergence of a new country in that speech but a reorganisation of Muslim-majority areas in north-western India into an autonomous unit within a single, loosely structured Indian federation.\n\nWe have seen that the League itself was vague about its demand in 1940. There was a very short time \u2013 just seven years \u2013 between the first formal articulation of the demand for a measure of autonomy for the Muslim-majority areas of the subcontinent and Partition. No one knew what the creation of Pakistan meant, and how it might shape people\u2019s lives in the future. Many who migrated from their homelands in 1947 thought they would return as soon as peace prevailed again.\n\nInitially even Muslim leaders did not seriously raise the demand for Pakistan as a sovereign state. In the beginning Jinnah himself may have seen the Pakistan idea as a bargaining counter, useful for blocking possible British concessions to the Congress and gaining additional favours for the Muslims. The pressure of the Second World War on the British delayed negotiations for independence for some time. Nonetheless, it was the massive Quit India Movement which started in 1942, and persisted despite intense repression, that brought the British Raj to its knees and compelled its officials to open a dialogue with Indian parties regarding a possible transfer of power.\n\nIn March 1946 the British Cabinet sent a three-member mission to Delhi to examine the League\u2019s demand and to suggest a suitable political framework for a free India. The Cabinet Mission toured the country for three months and recommended a loose three-tier confederation. India was to remain united. It was to have a weak central government controlling only foreign affairs, defence and communications with the existing provincial assemblies being grouped into three sections while electing the constituent assembly: Section A for the Hindu-majority provinces, and Sections B and C for the Muslim-majority provinces of the north-west and the north-east (including Assam) respectively. The sections or groups of provinces would comprise various regional units. They would have the power to set up intermediate-level executives and legislatures of their own.", "doc_id": "57d840e4-4959-11ed-87c9-0242ac110007"} {"source": "NCERT XII Business Studies, India", "document": "What is art? Art is the skillful and personal application of existing knowledge to achieve desired results. It can be acquired through study, observation and experience. Since art is concerned with personal application of knowledge, some kind of ingenuity and creativity is required to practise the basic principles learnt. 
The basic features of an art are as follows:\n\n(i) Existence of theoretical knowledge: Art presupposes the existence of certain theoretical knowledge. Experts in their respective areas have derived certain basic principles which are applicable to a particular form of art. For example, literature on dancing, public speaking, acting or music is widely recognised. \n\n(ii) Personalised application: The use of this basic knowledge varies from individual to individual. Art, therefore, is a very personalised concept. For example, two dancers, two speakers, two actors, or two writers will always differ in demonstrating their art.\n\n(iii) Based on practice and creativity: All art is practical. Art involves the creative practice of existing theoretical knowledge. We know that all music is based on seven basic notes. However, what makes the composition of a musician unique or different is his use of these notes in a creative manner that is entirely his own interpretation.\n\nManagement can be said to be an art since it satisfies the following criteria:\n\n(i) A successful manager practises the art of management in the day-to-day job of managing an enterprise based on study, observation and experience. There is a lot of literature available in various areas of management like marketing, finance and human resources which the manager has to specialise in. Theoretical knowledge thus exists.\n\n(ii) There are various theories of management, as propounded by many management thinkers, which prescribe certain universal principles. A manager applies these scientific methods and body of knowledge to a given situation, an issue or a problem, in his own unique manner. A good manager works through a combination of practice, creativity, imagination, initiative and innovation. A manager achieves perfection after long practice. Students of management also apply these principles differently depending on how creative they are.\n\n(iii) A manager applies this acquired knowledge in a personalised and skillful manner in the light of the realities of a given situation. He is involved in the activities of the organisation, studies critical situations and formulates his own theories for use in a given situation. This gives rise to different styles of management. \n\nThe best managers are committed and dedicated individuals; highly trained and educated, with personal qualities such as ambition, self-motivation, creativity and imagination, a desire for development of the self and the organisation they belong to. All management practices are based on the same set of principles; what distinguishes a successful manager from a less successful one is the ability to put these principles into practice.\n\nScience is a systematised body of knowledge that explains certain general truths or the operation of general laws. Based on the above features, we can say that management has some characteristics of science. \n\n(i) Management has a systematised body of knowledge. It has its own theory and principles that have developed over a period of time, but it also draws on other disciplines such as Economics, Sociology, Psychology and Mathematics. Like all other organised activity, management has its own vocabulary of terms and concepts. For example, all of us discuss sports like cricket and soccer using a common vocabulary. 
The players also use these terms to communicate with each other.", "doc_id": "0e6f7246-49f0-11ed-95a0-0242ac110007"} {"source": "NCERT XII Business Studies, India", "document": "Discipline is the obedience to organisational rules and employment agreements, which are necessary for the working of the organisation. According to Fayol, discipline requires good superiors at all levels, clear and fair agreements and judicious application of penalties. Suppose management and labour union have entered into an agreement whereby workers have agreed to put in extra hours without any additional payment to revive the company out of loss. In return the management has promised to increase wages of the workers when this mission is accomplished. Here discipline when applied would mean that the workers and management both honour their commitments without any prejudice towards one another.\n\nAccording to Fayol there should be one and only one boss for every individual employee. If an employee gets orders from two superiors at the same time, the principle of unity of command is violated. The principle of unity of command states that each participant in a formal organisation should receive orders from and be responsible to only one superior. Fayol gave a lot of importance to this principle. He felt that if this principle is violated \u201cauthority is undermined, discipline is in jeopardy, order disturbed and stability threatened\u201d. The principle resembles military organisation. Dual subordination should be avoided. This is to prevent confusion regarding tasks to be done. Suppose a sales person is asked to clinch a deal with a buyer and is allowed to give 10% discount by the marketing manager. But the finance department tells her/him not to offer more than 5% discount. Now there is no unity of command. This can be avoided if there is coordination between various departments.", "doc_id": "38ba3e86-49f6-11ed-a0ae-0242ac110007"} {"source": "NCERT XII Business Studies, India", "document": "Taylor was an ardent supporter of standardisation. According to him, the scientific method should be used to analyse methods of production prevalent under the rule of thumb. The best practices can be kept and further refined to develop a standard which should be followed throughout the organisation. This can be done through work-study techniques which include time study, motion study, fatigue study and method study, and which are discussed further in this chapter. It may be pointed out that even the contemporary techniques of business process including reengineering, kaizen (continuous improvement) and benchmarking are aimed at standardising the work. \n\nStandardisation refers to the process of setting standards for every business activity; it can be standardisation of process, raw material, time, product, machinery, methods or working conditions. These standards are the benchmarks, which must be adhered to during production.\n\nSimplification aims at eliminating superfluous varieties, sizes and dimensions while standardisation implies devising new varieties instead of the existing ones. Simplification aims at eliminating unnecessary diversity of products. It results in savings of cost of labour, machines and tools. It implies reduced inventories, fuller utilisation of equipment and increasing turnover. \n\nMost large companies like Nokia, Toyota and Microsoft, etc. have successfully implemented standardisation and simplification. 
This is evident from their large share in their respective markets.\n\nMotion study refers to the study of movements like lifting, putting objects, sitting and changing positions, etc., which are undertaken while doing a typical job. Unnecessary movements are sought to be eliminated so that it takes less time to complete the job efficiently. For example, Taylor and his associate Frank Gilbreth were able to reduce motions in bricklaying from 18 to just 5. Taylor demonstrated that productivity increased to about four times by this process.\n\nOn close examination of body motions, for example, it is possible to find out: (i) Motions which are productive (ii) Motions which are incidental (e.g., going to stores) (iii) Motions which are unproductive. Taylor used stopwatches and various symbols and colours to identify different motions. Through motion studies, Taylor was able to design suitable equipment and tools to educate workers on their use. The results achieved by him were truly remarkable.\n\nTime Study determines the standard time taken to perform a well-defined job. Time measuring devices are used for each element of the task. The standard time is fixed for the whole of the task by taking several readings. The method of time study will depend upon volume and frequency of the task, the cycle time of the operation and time measurement costs. The objective of time study is to determine the number of workers to be employed, frame suitable incentive schemes and determine labour costs.\n\nFor example, on the basis of several observations it is determined that the standard time taken by the worker to make one cardboard box is 20 minutes. So in one hour she/he will make 3 boxes. Assuming that a worker has to put in 8 hours of work in a shift and deducting one hour for rest and lunch, it is determined that in 7 hours a worker makes 21 boxes @ 3 boxes per hour. Now this is the standard task a worker has to do. Wages can be decided accordingly.\n\nA person is bound to feel tired physically and mentally if she/he does not rest while working. The rest intervals will help one to regain stamina and work again with the same capacity. This will result in increased productivity. Fatigue study seeks to determine the amount and frequency of rest intervals in completing a task. For example, normally in a plant, work takes place in three shifts of eight hours each. Even in a single shift a worker has to be given some rest interval to take her/his lunch etc. If the work involves heavy manual labour then small pauses have to be frequently given to the worker so that she/he can recharge her/his energy level for optimum contribution. \n\nThere can be many causes for fatigue like long working hours, doing unsuitable work, having uncordial relations with the boss or bad working conditions etc. Such hindrances in good performance should be removed.", "doc_id": "8e100774-49f8-11ed-a9c8-0242ac110007"} {"source": "NCERT XII Business Studies, India", "document": "Taylor was an ardent supporter of standardisation. According to him, the scientific method should be used to analyse methods of production prevalent under the rule of thumb. The best practices can be kept and further refined to develop a standard which should be followed throughout the organisation. This can be done through work-study techniques which include time study, motion study, fatigue study and method study, and which are discussed further in this chapter. 
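The time-study arithmetic in the passage above (a standard time of 20 minutes per box, an 8-hour shift less 1 hour of rest, hence 21 boxes a day) can be checked with a few lines of code. The following is a minimal sketch in Python; the function names and the piece rate of 10 per box are illustrative assumptions, not figures from the text.

```python
# Minimal sketch of the time-study arithmetic described above.
# The standard time, shift length, rest allowance and piece rate
# are illustrative assumptions, not prescribed values.

def standard_output(std_time_min: float, shift_hours: float, rest_hours: float) -> int:
    """Number of units a worker is expected to produce in one shift."""
    working_minutes = (shift_hours - rest_hours) * 60
    return int(working_minutes // std_time_min)

def piece_wage(units: int, rate_per_unit: float) -> float:
    """Daily wage under a simple piece-rate system."""
    return units * rate_per_unit

boxes = standard_output(std_time_min=20, shift_hours=8, rest_hours=1)
print(boxes)                    # 21, matching the worked example
print(piece_wage(boxes, 10.0))  # 210.0, at an assumed rate of 10 per box
```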
It may be pointed out that even the contemporary techniques of business process including reengineering, kaizen (continuous improvement) and benchmarking are aimed at standardising the work. \n\nStandardisation refers to the process of setting standards for every business activity; it can be standardisation of process, raw material, time, product, machinery, methods or working conditions. These standards are the benchmarks, which must be adhered to during production.\n\nSimplification aims at eliminating superfluous varieties, sizes and dimensions while standardisation implies devising new varieties instead of the existing ones. Simplification aims at eliminating unnecessary diversity of products. It results in savings of cost of labour, machines and tools. It implies reduced inventories, fuller utilisation of equipment and increasing turnover. \n\nMost large companies like Nokia, Toyota and Microsoft, etc. have successfully implemented standardisation and simplification. This is evident from their large share in their respective markets.\n\nMotion study refers to the study of movements like lifting, putting objects, sitting and changing positions, etc., which are undertaken while doing a typical job. Unnecessary movements are sought to be eliminated so that it takes less time to complete the job efficiently. For example, Taylor and his associate Frank Gilbreth were able to reduce motions in bricklaying from 18 to just 5. Taylor demonstrated that productivity increased to about four times by this process.\n\nOn close examination of body motions, for example, it is possible to find out: (i) Motions which are productive (ii) Motions which are incidental (e.g., going to stores) (iii) Motions which are unproductive. Taylor used stopwatches and various symbols and colours to identify different motions. Through motion studies, Taylor was able to design suitable equipment and tools to educate workers on their use. The results achieved by him were truly remarkable.\n\nTime Study determines the standard time taken to perform a well-defined job. Time measuring devices are used for each element of the task. The standard time is fixed for the whole of the task by taking several readings. The method of time study will depend upon volume and frequency of the task, the cycle time of the operation and time measurement costs. The objective of time study is to determine the number of workers to be employed, frame suitable incentive schemes and determine labour costs.\n\nFor example, on the basis of several observations it is determined that the standard time taken by the worker to make one cardboard box is 20 minutes. So in one hour she/he will make 3 boxes. Assuming that a worker has to put in 8 hours of work in a shift and deducting one hour for rest and lunch, it is determined that in 7 hours a worker makes 21 boxes @ 3 boxes per hour. Now this is the standard task a worker has to do. Wages can be decided accordingly.\n\nA person is bound to feel tired physically and mentally if she/he does not rest while working. The rest intervals will help one to regain stamina and work again with the same capacity. This will result in increased productivity. Fatigue study seeks to determine the amount and frequency of rest intervals in completing a task. For example, normally in a plant, work takes place in three shifts of eight hours each. Even in a single shift a worker has to be given some rest interval to take her/his lunch etc. 
If the work involves heavy manual labour then small pauses have to be frequently given to the worker so that she/he can recharge her/his energy level for optimum contribution. \n\nThere can be many causes for fatigue like long working hours, doing unsuitable work, having uncordial relations with the boss or bad working conditions etc. Such hindrances in good performance should be removed.", "doc_id": "b507865e-49f8-11ed-b1f2-0242ac110007"} {"source": "NCERT XII Business Studies, India", "document": "Taylor was an ardent supporter of standardisation. According to him, the scientific method should be used to analyse methods of production prevalent under the rule of thumb. The best practices can be kept and further refined to develop a standard which should be followed throughout the organisation. This can be done through work-study techniques which include time study, motion study, fatigue study and method study, and which are discussed further in this chapter. It may be pointed out that even the contemporary techniques of business process including reengineering, kaizen (continuous improvement) and benchmarking are aimed at standardising the work. \n\nStandardisation refers to the process of setting standards for every business activity; it can be standardisation of process, raw material, time, product, machinery, methods or working conditions. These standards are the benchmarks, which must be adhered to during production.\n\nSimplification aims at eliminating superfluous varieties, sizes and dimensions while standardisation implies devising new varieties instead of the existing ones. Simplification aims at eliminating unnecessary diversity of products. It results in savings of cost of labour, machines and tools. It implies reduced inventories, fuller utilisation of equipment and increasing turnover. \n\nMost large companies like Nokia, Toyota and Microsoft, etc. have successfully implemented standardisation and simplification. This is evident from their large share in their respective markets.\n\nMotion study refers to the study of movements like lifting, putting objects, sitting and changing positions, etc., which are undertaken while doing a typical job. Unnecessary movements are sought to be eliminated so that it takes less time to complete the job efficiently. For example, Taylor and his associate Frank Gilbreth were able to reduce motions in bricklaying from 18 to just 5. Taylor demonstrated that productivity increased to about four times by this process.\n\nOn close examination of body motions, for example, it is possible to find out: (i) Motions which are productive (ii) Motions which are incidental (e.g., going to stores) (iii) Motions which are unproductive. Taylor used stopwatches and various symbols and colours to identify different motions. Through motion studies, Taylor was able to design suitable equipment and tools to educate workers on their use. The results achieved by him were truly remarkable.\n\nTime Study determines the standard time taken to perform a well-defined job. Time measuring devices are used for each element of the task. The standard time is fixed for the whole of the task by taking several readings. The method of time study will depend upon volume and frequency of the task, the cycle time of the operation and time measurement costs. 
The objective of time study is to determine the number of workers to be employed, frame suitable incentive schemes and determine labour costs.\n\nFor example, on the basis of several observations it is determined that the standard time taken by the worker to make one cardboard box is 20 minutes. So in one hour she/he will make 3 boxes. Assuming that a worker has to put in 8 hours of work in a shift and deducting one hour for rest and lunch, it is determined that in 7 hours a worker makes 21 boxes @ 3 boxes per hour. Now this is the standard task a worker has to do. Wages can be decided accordingly.\n\nA person is bound to feel tired physically and mentally if she/he does not rest while working. The rest intervals will help one to regain stamina and work again with the same capacity. This will result in increased productivity. Fatigue study seeks to determine the amount and frequency of rest intervals in completing a task. For example, normally in a plant, work takes place in three shifts of eight hours each. Even in a single shift a worker has to be given some rest interval to take her/his lunch etc. If the work involves heavy manual labour then small pauses have to be frequently given to the worker so that she/he can recharge her/his energy level for optimum contribution. \n\nThere can be many causes for fatigue like long working hours, doing unsuitable work, having uncordial relations with the boss or bad working conditions etc. Such hindrances in good performance should be removed.", "doc_id": "de6ebb98-49f8-11ed-82c8-0242ac110007"} {"source": "NCERT XII Business Studies, India", "document": "Scientific management refers to an important stream of one of the earlier schools of thought of management referred to as the \u2018Classical\u2019 school. The other two streams belonging to the classical school are Fayol\u2019s Administrative Theory and Max Weber\u2019s Bureaucracy. We will not be describing bureaucracy here. A discussion of Fayol\u2019s principles, however, will follow the discussion of scientific management. \n\nFrederick Winslow Taylor (March 20, 1856 \u2013 March 21, 1915) was an American mechanical engineer who sought to improve industrial efficiency. In 1874, he became an apprentice machinist, learning factory conditions at the grass roots level. He earned a degree in mechanical engineering. He was one of the intellectual leaders of the efficiency movement and was highly influential in reshaping the factory system of production. You must appreciate that he belonged to the era of the industrial revolution characterised by mass production. You must also appreciate that every new development takes some time to be perfected. Taylor\u2019s contribution must be seen in the light of the efforts made to perfect the factory system of production. \n\nTaylor thought that by scientifically analysing work, it would be possible to find \u2018one best way\u2019 to do it. He is most remembered for his time and motion studies. He would break a job into its component parts and measure each to the second.\n\nTaylor believed that contemporary management was amateurish and should be studied as a discipline. He also wanted workers to cooperate with the management, so that there would be no need for trade unions. The best results would come from the partnership between a trained and qualified management and a cooperative and innovative workforce. 
Each side needed the other.\n\nHe is known for coining the term \u2018Scientific Management\u2019 in his article \u2018The Principles of Scientific Management\u2019, published in 1911. After being fired from Bethlehem Steel Company he wrote a book, \u2018Shop Management\u2019, which sold well. He was selected to be the president of the American Society of Mechanical Engineers (ASME) from 1906 to 1907. He was a professor at the Tuck School of Business at Dartmouth College, which was founded in 1900. \n\nIn 1884, he became an executive at Midvale Steel Company by demonstrating his leadership abilities. He instructed his fellow workers to work in phases. He joined the Bethlehem Iron Company in 1898, which later became Bethlehem Steel Company. He was originally employed to introduce a piece-rate wage system. After setting up the wage system, he was given authority and more responsibilities in the company. Using his newfound resources, he increased the staff and made Bethlehem a showplace for inventive work. Unfortunately, the company was sold to another group and he was discharged.\n\nIn the development of the classical school of management thought, Fayol\u2019s administrative theory provides an important link. While Taylor succeeded in revolutionising the working of the factory shop floor in terms of devising the best method, a fair day\u2019s work, the differential piece-rate system and functional foremanship, Henri Fayol explained what amounts to a manager\u2019s work and what principles should be followed in doing this work. If workers\u2019 efficiency mattered in the factory system, so did managerial efficiency. Fayol\u2019s contribution must be interpreted in terms of the impact that his writings had, and continue to have, on the improvement of managerial efficiency. \n\nHenri Fayol (1841-1925) was a French management theorist whose theories concerning the scientific organisation of labour were widely influential in the beginning of the twentieth century. He graduated from the mining academy of St. Etienne in 1860 in mining engineering. The 19-year-old engineer started at the mining company \u2018Compagnie de Commentry-Fourchambault-Decazeville\u2019, ultimately acting as its managing director from 1888 to 1918.\n\nHis theories deal with the organisation of production in the context of a competitive enterprise that has to control its production costs. Fayol was the first to identify the four functions of management \u2013 Planning, Organising, Directing and Controlling \u2013 although his version was a bit different: Plan, Organise, Command, Coordinate and Control. According to Fayol, all activities of an industrial undertaking could be divided into: Technical; Commercial; Financial; Security; Accounting and Managerial. He also suggested that the qualities a manager must possess should be \u2014 Physical, Moral, Education, Knowledge and Experience. He believed that the number of management principles that might help to improve an organisation\u2019s operation is potentially limitless.\n\nBased largely on his own experience, he developed his concept of administration. The 14 principles of management propounded by him were discussed in detail in his book published in 1917, \u2018Administration industrielle et generale\u2019. It was published in English as \u2018General and Industrial Management\u2019 in 1949 and is widely considered a foundational work in classical management theory.
For his contribution, he is also known as the \u2018Father of General Management\u2019.", "doc_id": "e0c95ec2-49fb-11ed-b4b3-0242ac110007"} {"source": "NCERT XII Business Studies, India", "document": "We have seen how planning is essential for business organisations. It is difficult to manage operations without formal planning. It is important for an organisation to move towards achieving goals. But we have often seen in our daily lives that things do not always go according to plan. Unforeseen events and changes, rise in costs and prices, environmental changes, government interventions, legal regulations, all affect our business plans. Plans then need to be modified. If we cannot adhere to our plans, then why do we plan at all? This is what we need to analyse. The major limitations of planning are given below:\n\n(i) Planning leads to rigidity: In an organisation, a well-defined plan is drawn up with specific goals to be achieved within a specific time frame. These plans then decide the future course of action, and managers may not be in a position to change them. This kind of rigidity in plans may create difficulty. Managers need to be given some flexibility to be able to cope with changed circumstances. Following a pre-decided plan, when circumstances have changed, may not turn out to be in the organisation\u2019s interest.\n\n(ii) Planning may not work in a dynamic environment: The business environment is dynamic; nothing is constant. The environment consists of a number of dimensions: economic, political, physical, legal and social. The organisation has to constantly adapt itself to changes. It becomes difficult to accurately assess future trends in the environment if economic policies are modified, or political conditions in the country are not stable, or there is a natural calamity. Competition in the market can also upset financial plans, sales targets may have to be revised and, accordingly, cash budgets also need to be modified since they are based on sales figures. Planning cannot foresee everything and thus, there may be obstacles to effective planning.\n\n(iii) Planning reduces creativity: Planning is an activity which is done by the top management. Usually the rest of the members just implement these plans. As a consequence, middle management and other decision makers are neither allowed to deviate from plans nor are they permitted to act on their own. Thus, much of the initiative or creativity inherent in them also gets lost or reduced. Most of the time, employees do not even attempt to formulate plans. They only carry out orders. Thus, planning in a way reduces creativity since people tend to think along the same lines as others. There is nothing new or innovative.\n\n(iv) Planning involves huge costs: When plans are drawn up, huge costs are involved in their formulation. These may be in terms of time and money; for example, checking the accuracy of facts may involve a lot of time. Detailed plans require scientific calculations to ascertain facts and figures. The costs incurred sometimes may not justify the benefits derived from the plans.
There are a number of incidental costs as well, like expenses on boardroom meetings, discussions with professional experts and preliminary investigations to find out the viability of the plan.\n\n(v) Planning is a time-consuming process: Sometimes plans take so much time to draw up that there is not much time left for their implementation.\n\n(vi) Planning does not guarantee success: The success of an enterprise is possible only when plans are properly drawn up and implemented. Any plan needs to be translated into action or it becomes meaningless. Managers have a tendency to rely on previously tried and tested successful plans. It is not always true that just because a plan has worked before, it will work again. Besides, there are so many other unknown factors to be considered. This kind of complacency and false sense of security may actually lead to failure instead of success. However, despite its limitations, planning is not a useless exercise. It is a tool to be used with caution. It provides a base for analysing future courses of action. But, it is not a solution to all problems.", "doc_id": "a8d179ea-49fc-11ed-8bd4-0242ac110007"}
{"source": "NCERT XII Business Studies, India", "document": "We have seen how planning is essential for business organisations. It is difficult to manage operations without formal planning. It is important for an organisation to move towards achieving goals. But we have often seen in our daily lives also that things do not always go according to plan. Unforeseen events and changes, rise in costs and prices, environmental changes, government interventions, legal regulations, all affect our business plans. Plans then need to be modified. If we cannot adhere to our plans, then why do we plan at all? This is what we need to analyse.\n\nWhen plans are drawn up, huge costs are involved in their formulation. These may be in terms of time and money; for example, checking the accuracy of facts may involve a lot of time. Detailed plans require scientific calculations to ascertain facts and figures. The costs incurred sometimes may not justify the benefits derived from the plans.
There are a number of incidental costs as well, like expenses on boardroom meetings, discussions with professional experts and preliminary investigations to find out the viability of the plan.\n\nThe success of an enterprise is possible only when plans are properly drawn up and implemented. Any plan needs to be translated into action or it becomes meaningless. Managers have a tendency to rely on previously tried and tested successful plans. It is not always true that just because a plan has worked before, it will work again. Besides, there are so many other unknown factors to be considered. This kind of complacency and false sense of security may actually lead to failure instead of success. However, despite its limitations, planning is not a useless exercise. It is a tool to be used with caution. It provides a base for analysing future courses of action. But, it is not a solution to all problems.\n\nBy stating in advance how work is to be done, planning provides direction for action. Planning ensures that the goals or objectives are clearly stated so that they act as a guide for deciding what action should be taken and in which direction. If goals are well defined, employees are aware of what the organisation has to do and what they must do to achieve those goals. Departments and individuals in the organisation are able to work in coordination. If there were no planning, employees would be working in different directions and the organisation would not be able to achieve its desired goals.\n\nPlanning is an activity which enables a manager to look ahead and anticipate changes. By deciding in advance the tasks to be performed, planning shows the way to deal with changes and uncertain events. Changes or events cannot be eliminated, but they can be anticipated and managerial responses to them can be developed.\n\nPlanning serves as the basis for coordinating the activities and efforts of different divisions, departments and individuals. It helps in avoiding confusion and misunderstanding. Since planning ensures clarity in thought and action, work is carried on smoothly without interruptions. Useless and redundant activities are minimised or eliminated. It is easier to detect inefficiencies and take corrective measures to deal with them.\n\nSince planning is the first function of management, new ideas can take the shape of concrete plans. It is the most challenging activity for the management as it guides all future actions leading to the growth and prosperity of the business.\n\nPlanning helps the manager to look into the future and make a choice from amongst various alternative courses of action. The manager has to evaluate each alternative and select the most viable proposition. Planning involves setting targets and predicting future conditions, thus helping in taking rational decisions.\n\nPlanning involves the setting of goals. The entire managerial process is concerned with accomplishing predetermined goals through planning, organising, staffing, directing and controlling. Planning provides the goals or standards against which actual performance is measured. By comparing actual performance with some standard, managers can know whether they have actually been able to attain the goals. If there is any deviation, it can be corrected. Therefore, we can say that planning is a prerequisite for controlling. If there were no goals and standards, then finding deviations, which are a part of controlling, would not be possible. The nature of corrective action required depends upon the extent of deviations from the standard.
Therefore, planning provides the basis of control.", "doc_id": "19ff143a-4a00-11ed-afa0-0242ac110007"} {"source": "NCERT XII Business Studies, India", "document": "Organising involves a series of steps that need to be taken in order to achieve the desired goal. Let us try and understand how organising is carried out with the help of an example.\n\nSuppose twelve students work for the school library in the summer vacations. One afternoon they are told to unload a shipment of new releases, stock the bookshelves, and then dispose of all waste (packaging, paper, etc.). If all the students decide to do it in their own way, it will result in mass confusion. However, if one student supervises the work by grouping students, dividing the work, assigning each group their quota and developing reporting relationships among them, the job will be done faster and in a better manner. From the above description, the following steps emerge in the process of organising:\n\n(i) Identification and division of work: The first step in the process of organising involves identifying and dividing the work that has to be done in accordance with previously determined plans. The work is divided into manageable activities so that duplication can be avoided and the burden of work can be shared among the employees. \n\n(ii) Departmentalisation: Once work has been divided into small and manageable activities, those activities which are similar in nature are grouped together. Such sets facilitate specialisation. This grouping process is called departmentalisation. Departments can be created using several criteria as a basis. Examples of some of the most popularly used bases are territory (north, south, west, etc.) and products (appliances, clothes, cosmetics, etc.).\n\n(iii) Assignment of duties: It is necessary to define the work of different job positions and accordingly allocate work to various employees. Once departments have been formed, each of them is placed under the charge of an individual. Jobs are then allocated to the members of each department in accordance with their skills and competencies. It is essential for effective performance that a proper match is made between the nature of a job and the ability of an individual. The work must be assigned to those who are best fitted to perform it well.\n\n(iv) Establishing authority and reporting relationships: Merely allocating work is not enough. Each individual should also know who he has to take orders from and to whom he is accountable. The establishment of such clear relationships helps to create a hierarchical structure and helps in coordination amongst various departments.\n\nPerformance of the organising function can pave the way for a smooth transition of the enterprise in accordance with the dynamic business environment. The significance of the organising function mainly arises from the fact that it helps in the survival and growth of an enterprise and equips it to meet various challenges. In order for any business enterprise to perform tasks and successfully meet goals, the organising function must be properly performed. The following points highlight the crucial role that organising plays in any business enterprise:\n\n(i) Benefits of specialisation: Organising leads to a systematic allocation of jobs amongst the work force. This reduces the workload as well as enhances productivity, because specific workers perform a specific job on a regular basis.
Repetitive performance of a particular task allows a worker to gain experience in that area and leads to specialisation.\n\n(ii) Clarity in working relationships: The establishment of working relationships clarifies lines of communication and specifies who is to report to whom. This removes ambiguity in the transfer of information and instructions. It helps in creating a hierarchical order, thereby enabling the fixation of responsibility and the specification of the extent of authority to be exercised by an individual. \n\n(iii) Optimum utilisation of resources: Organising leads to the proper usage of all material, financial and human resources. The proper assignment of jobs avoids overlapping of work and also makes possible the best use of resources. Avoidance of duplication of work helps in preventing confusion and minimising the wastage of resources and efforts. \n\n(iv) Adaptation to change: The process of organising allows a business enterprise to accommodate changes in the business environment. It allows the organisation structure to be suitably modified and the revision of inter-relationships amongst managerial levels to pave the way for a smooth transition. It also provides much-needed stability to the enterprise as it can then continue to survive and grow in spite of changes.", "doc_id": "647131ae-4a02-11ed-8c0a-0242ac110007"} {"source": "NCERT XII Business Studies, India", "document": "A manager, no matter how capable he is, cannot manage to do every task on his own. The volume of work makes it impractical for him to handle it all by himself. As a consequence, if he desires to meet the organisational goals, focus on objectives and ensure that all work is accomplished, he must delegate authority.\n\nDelegation refers to the downward transfer of authority from a superior to a subordinate. It is a prerequisite to the efficient functioning of an organisation because it enables a manager to use his time on high-priority activities. It also satisfies the subordinate\u2019s need for recognition and provides them with opportunities to develop and exercise initiative.\n\nDelegation helps a manager to extend his area of operations as, without it, his activities would be restricted to only what he himself can do. However, delegation does not mean abdication. The manager shall still be accountable for the performance of the assigned tasks.\n\nMoreover, the authority granted to a subordinate can be taken back and redelegated to another person. Thus, irrespective of the extent of delegated authority, the manager shall still be accountable to the same extent as before delegation.\n\nAccording to Louis Allen, delegation is the entrustment of responsibility and authority to another and the creation of accountability for performance. A detailed analysis of Louis Allen\u2019s definition brings to light the following essential elements of delegation:\n\n(i) Authority: Authority refers to the right of an individual to command his subordinates and to take action within the scope of his position. The concept of authority arises from the established scalar chain which links the various job positions and levels of an organisation. Authority also refers to the right to take decisions inherent in a managerial position to tell people what to do and expect them to do it. \n\nIn the formal organisation, authority originates by virtue of an individual\u2019s position, and the extent of authority is highest at the top management levels and reduces successively as we go down the corporate ladder.
Thus, authority flows from top to bottom, i.e., the superior has authority over the subordinate.\n\nAuthority relationships help to maintain order in the organisation by giving the managers the right to exact obedience and give directions to the workforce under them.\n\nAuthority determines the superior-subordinate relationship wherein the superior communicates his decision to the subordinate, expecting compliance from him, and the subordinate executes the decision as per the guidelines of the superior. The extent to which a superior can exact compliance also depends on the personality of the superior.\n\nIt must be noted that authority is restricted by laws and the rules and regulations of the organisation, which limit its scope. However, as we go higher up in the management hierarchy, the scope of authority increases.\n\n(ii) Responsibility: Responsibility is the obligation of a subordinate to properly perform the assigned duty. It arises from a superior\u2013subordinate relationship because the subordinate is bound to perform the duty assigned to him by his superior. Thus, responsibility flows upwards, i.e., a subordinate will always be responsible to his superior.\n\nAn important consideration to be kept in view with respect to both authority and responsibility is that when an employee is given responsibility for a job, he must also be given the degree of authority necessary to carry it out. Thus, for effective delegation, the authority granted must be commensurate with the assigned responsibility. If the authority granted is more than the responsibility, it may lead to misuse of authority, and if the responsibility assigned is more than the authority, it may make a person ineffective.\n\n(iii) Accountability: Delegation of authority undoubtedly empowers an employee to act for his superior, but the superior would still be accountable for the outcome.\n\nAccountability implies being answerable for the final outcome. Once authority has been delegated and responsibility accepted, one cannot deny accountability. It cannot be delegated and flows upwards, i.e., a subordinate will be accountable to a superior for satisfactory performance of work. It indicates that the manager has to ensure the proper discharge of duties by his subordinates. It is generally enforced through regular feedback on the extent of work accomplished. The subordinate will be expected to explain the consequences of his actions or omissions. \n\nDelegation ensures that the subordinates perform tasks on behalf of the manager, thereby reducing his workload and providing him with more time to concentrate on important matters.
Effective delegation leads to the following benefits: \n\n(i) Effective management: By empowering the employees, the managers are able to function more efficiently as they get more time to concentrate on important matters. Freedom from doing routine work provides them with opportunities to excel in new areas.\n\n(ii) Employee development: As a result of delegation, employees get more opportunities to utilise their talent, and this may give rise to latent abilities in them. It allows them to develop those skills which will enable them to perform complex tasks and assume those responsibilities which will improve their career prospects. It makes them better leaders and decision makers. Thus, delegation helps by preparing better future managers. Delegation empowers the employees by providing them with the chance to use their skills, gain experience and develop themselves for higher positions. \n\n(iii) Motivation of employees: Delegation helps in developing the talents of the employees. It also has psychological benefits. When a superior entrusts a subordinate with a task, it is not merely the sharing of work but involves trust on the superior\u2019s part and commitment on the part of the subordinate. Responsibility for work builds the self-esteem of an employee and improves his confidence. He feels encouraged and tries to improve his performance further.\n\n(iv) Facilitation of growth: Delegation helps in the expansion of an organisation by providing a ready workforce to take up leading positions in new ventures. Trained and experienced employees are able to play significant roles in the launch of new projects by replicating the work ethos they have absorbed from existing units in the newly set up branches.\n\n(v) Basis of management hierarchy: Delegation of authority establishes superior-subordinate relationships, which are the basis of the hierarchy of management. It is the degree and flow of authority which determines who has to report to whom. The extent of delegated authority also decides the power that each job position enjoys in the organisation.\n\n(vi) Better coordination: The elements of delegation, namely authority, responsibility and accountability, help to define the powers, duties and answerability related to the various positions in an organisation. This helps to avoid overlapping of duties and duplication of effort, as it gives a clear picture of the work being done at various levels. Such clarity in reporting relationships helps in developing and maintaining effective coordination amongst the departments, levels and functions of management.", "doc_id": "aa370318-4a06-11ed-9d41-0242ac110007"} {"source": "NCERT XII Business Studies, India", "document": "In many organisations, the top management plays an active role in taking all decisions, while there are others in which this power is given even to the lower levels of management. Those organisations in which decision making authority lies with the top management are termed centralised organisations, whereas those in which such authority is shared with lower levels are decentralised organisations.\n\nDecentralisation explains the manner in which decision making responsibilities are divided among hierarchical levels. Put simply, decentralisation refers to the delegation of authority throughout all the levels of the organisation. Decision making authority is shared with lower levels and is consequently placed nearest to the points of action.
In other words, decision making authority is pushed down the chain of command.\n\nWhen decisions taken by the lower levels are numerous as well as important, an organisation can be regarded as greatly decentralised.\n\nAn organisation is centralised when decision-making authority is retained by higher management levels, whereas it is decentralised when such authority is delegated.\n\nComplete centralisation would imply the concentration of all decision making functions at the apex of the management hierarchy. Such a scenario would obviate the need for a management hierarchy. On the other hand, complete decentralisation would imply the delegation of all decision making functions to the lower levels of the hierarchy, and this would obviate the need for higher managerial positions. Both the scenarios are unrealistic.\n\nAn organisation can never be completely centralised or decentralised. As it grows in size and complexity, there is a tendency to move towards decentralised decision making. This is because in large organisations those employees who are directly and closely involved with certain operations tend to have more knowledge about them than the top management, which may only be indirectly associated with individual operations. \n\nHence, there is a need for a balance between these co-existing forces. Thus, it can be said that every organisation will be characterised by both centralisation and decentralisation.\n\nDecentralisation is much more than a mere transfer of authority to the lower levels of the management hierarchy. It is a philosophy that implies selective dispersal of authority because it propagates the belief that people are competent, capable and resourceful. They can assume the responsibility for the effective implementation of their decisions. Thus, this philosophy recognises the decision maker\u2019s need for autonomy. The management, however, needs to carefully select those decisions which will be pushed down to lower levels and those that will be retained at higher levels.\n\nIn conclusion, it must be noted that, in spite of its benefits, decentralisation should be applied with caution as it can lead to organisational disintegration if the departments start to operate on their own guidelines, which may be contrary to the interest of the organisation. Decentralisation must always be balanced with centralisation in areas of major policy decisions.", "doc_id": "c6a6f21a-4a3d-11ed-9eff-0242ac110007"} {"source": "NCERT XII Business Studies, India", "document": "Operationally, understanding the manpower requirements would necessitate workload analysis on the one hand and workforce analysis on the other. Workload analysis would enable an assessment of the number and types of human resources necessary for the performance of various jobs and the accomplishment of organisational objectives. Workforce analysis would reveal the number and type available. In fact, such an exercise would reveal whether we are understaffed, overstaffed or optimally staffed. It may be pointed out that neither over-staffing nor under-staffing is a desirable situation. Can you think why? In fact, this exercise would form the basis of the subsequent staffing actions.\n\nA situation of overstaffing somewhere would necessitate employee removal or transfer elsewhere. A situation of understaffing would necessitate the starting of the recruitment process.
However, before that can be done, it is important to translate the manpower requirements into a specific job description and the desirable profile of its occupant \u2014 the desired qualifications, experience, personality characteristics and so on. This information becomes the base for looking for potential employees.\n\nRecruitment may be defined as the process of searching for prospective employees and stimulating them to apply for jobs in the organisation. The information generated in the process of writing the job description and the candidate profile may be used for developing the \u2018situations vacant\u2019 advertisement. The advertisement may be displayed on the factory/office gate, or else it may be got published in print media or flashed in electronic media. This step involves locating the potential candidates or determining the sources of potential candidates. In fact, there are a large number of recruitment avenues available to a firm, which will be discussed later when we talk about the various sources of recruitment. The essential objective is to create a pool of prospective job candidates. Both internal and external sources of recruitment may be explored. Internal sources may be used to a limited extent. For fresh talent and wider choice, external sources are used.\n\nSelection is the process of choosing from among the pool of prospective job candidates developed at the stage of recruitment. Even in the case of highly specialised jobs where the choice space is very narrow, the rigour of the selection process serves two important purposes: (i) it ensures that the organisation gets the best among the available, and (ii) it enhances the self-esteem and prestige of those selected and conveys to them the seriousness with which things are done in the organisation. The rigour involves a host of tests and interviews, described later. Those who are able to successfully negotiate the tests and the interviews are offered an employment contract, a written document containing the offer of employment, the terms and conditions and the date of joining.\n\nJoining a job marks the beginning of the socialisation of the employee at the workplace. The employee is given a brief presentation about the company and is introduced to his superiors, subordinates and colleagues. He is taken around the workplace and given charge of the job for which he has been selected. This process of familiarisation is very crucial and may have a lasting impact on his decision to stay and on his job performance. Orientation is, thus, introducing the selected employee to other employees and familiarising him with the rules and policies of the organisation. Placement refers to the employee occupying the position or post for which the person has been selected.", "doc_id": "f4b65f6e-4a3e-11ed-83ad-0242ac110007"} {"source": "NCERT XII Business Studies, India", "document": "In a new enterprise, the staffing function follows the planning and organising functions. After deciding what is to be done and how it is to be done, and after the creation of the organisation structure, the management is in a position to know the human resource requirements of the enterprise at different levels. Once the number and types of personnel to be selected are determined, management starts with the activities relating to recruiting, selecting and training people, to fulfill the requirements of the enterprise.
In an existing enterprise, staffing is a continuous process because new jobs may be created and some of the existing employees may leave the organisation.\n\nIn any organisation, there is a need for people to perform work. The staffing function of management fulfills this requirement and finds the right people for the right job. Basically, staffing fills the positions shown in the organisation structure. Human resources are the foundation of any business. The right people can help you take your business to the top; the wrong people can break your business. Hence, staffing is the most fundamental and critical driver of organisational performance. The staffing function has assumed greater importance these days because of the rapid advancement of technology, the increasing size of organisations and the complicated behaviour of human beings. Human resources are the most important asset of an organisation. The ability of an organisation to achieve its goals depends upon the quality of its human resources. Therefore, staffing is a very important managerial function. No organisation can be successful unless it can fill, and keep filled, the various positions provided for in the structure with the right kind of people.\n\nProper staffing ensures the following benefits to the organisation:\n(i) helps in discovering and obtaining competent personnel for various jobs;\n\n(ii) makes for higher performance, by putting the right person on the right job;\n\n(iii) ensures the continuous survival and growth of the enterprise through succession planning for managers;\n\n(iv) helps to ensure optimum utilisation of the human resources. By avoiding overmanning, it prevents under-utilisation of personnel and high labour costs. At the same time, it avoids disruption of work by indicating in advance the shortages of personnel; and\n\n(v) improves job satisfaction and morale of employees through objective assessment and fair reward for their contribution.\n\nThe staffing function must be performed efficiently by all organisations. If the right kind of employees are not available, it will lead to wastage of materials, time, effort and energy, resulting in lower productivity and poor quality of products. The enterprise will not be able to sell its products profitably. It is, therefore, essential that the right kind of people are available in the right number at the right time. They should be given adequate training so that wastage is minimum. They must also be induced to show higher productivity and quality by offering them proper incentives.", "doc_id": "081795ee-4a41-11ed-872c-0242ac110007"} {"source": "NCERT XII Business Studies, India", "document": "The object of recruitment is to attract potential employees with the necessary characteristics or qualifications, in adequate numbers, for the jobs available. It locates available people for the job and invites them to apply for the job in the organisation. The process of recruitment precedes the process of selection of the right candidate for the given positions in the organisation. Recruitment seeks to attract suitable applicants to apply for available jobs. The various activities involved in the process of recruitment include (a) identification of the different sources of labour supply, (b) assessment of their validity, (c) choosing the most suitable source or sources, and (d) inviting applications from the prospective candidates for the vacancies.\n\nThe requisite positions may be filled up from within the organisation or from outside.
Thus, there are two sources of recruitment \u2013 Internal and External.\n\nThere are two important sources of internal recruitment, namely, transfers and promotions, which are discussed below:\n\n(i) Transfers: It involves the shifting of an employee from one job to another, one department to another or from one shift to another, without a substantive change in the responsibilities and status of the employee. It may lead to changes in duties and responsibilities, working conditions, etc., but not necessarily salary. Transfer is a good source of filling vacancies with employees from over-staffed departments. It is practically a horizontal movement of employees. A shortage of suitable personnel in one branch may be filled through transfer from another branch or department. Job transfers are also helpful in avoiding termination and in removing individual problems and grievances. At the time of transfer, it should be ensured that the employee to be transferred to another job is capable of performing it. Transfers can also be used for training employees to learn different jobs.\n\n(ii) Promotions: Business enterprises generally follow the practice of filling higher jobs by promoting employees from lower jobs. Promotion leads to the shifting of an employee to a higher position, carrying higher responsibilities, facilities, status and pay. Promotion is a vertical shifting of employees. This practice helps to improve the motivation, loyalty and satisfaction level of employees. It has a great psychological impact on the employees because a promotion at a higher level may lead to a chain of promotions at lower levels in the organisation.\n\nFilling vacancies in higher jobs from within the organisation or through internal transfers has the following merits:\n\n(i) Employees are motivated to improve their performance. A promotion at a higher level may lead to a chain of promotions at lower levels in the organisation. This motivates the employees to improve their performance through learning and practice. Employees work with commitment and loyalty and remain satisfied with their jobs. Also, peace prevails in the enterprise because of promotional avenues;\n\n(ii) Internal recruitment also simplifies the process of selection and placement. The candidates that are already working in the enterprise can be evaluated more accurately and economically. This is a more reliable way of recruitment since the candidates are already known to the organisation; \n\n(iii) Transfer is a tool for training employees to prepare them for higher jobs. Also, people recruited from within the organisation do not need induction training;\n\n(iv) Transfer has the benefit of shifting the workforce from surplus departments to those where there is a shortage of staff;\n\n(v) Filling of jobs internally is cheaper as compared to getting candidates from external sources.\n\nThe limitations of using internal sources of recruitment are as follows:\n\n(i) When vacancies are filled through internal promotions, the scope for the induction of fresh talent is reduced. Hence, complete reliance on internal recruitment involves the danger of \u2018inbreeding\u2019 by stopping the \u2018infusion of new blood\u2019 into the organisation;\n\n(ii) The employees may become lethargic if they are sure of time-bound promotions;\n\n(iii) A new enterprise cannot use internal sources of recruitment.
No organisation can fill all its vacancies from internal sources;\n\n(iv) The spirit of competition among the employees may be hampered; and\n\n(v) Frequent transfers of employees may often reduce the productivity of the organisation.\n", "doc_id": "6287c5b6-4a42-11ed-96d6-0242ac110007"} {"source": "NCERT XII Business Studies, India", "document": "It is generally observed that managers face several problems due to communication breakdowns or barriers. These barriers may prevent a communication, or filter part of it, or carry incorrect meaning, due to which misunderstandings may be created. Therefore, it is important for a manager to identify such barriers and take measures to overcome them.\n\nThe barriers to communication in organisations can be broadly grouped as: semantic barriers, psychological barriers, organisational barriers, and personal barriers. These are briefly discussed below:\n\nSemantic barriers: Semantics is the branch of linguistics dealing with the meaning of words and sentences. Semantic barriers are concerned with problems and obstructions in the process of encoding and decoding of a message into words or impressions. Normally, such barriers result on account of the use of wrong words, faulty translations, different interpretations, etc. These are discussed below:\n\n(i) Badly expressed message: Sometimes the intended meaning may not be conveyed by a manager to his subordinates. These badly expressed messages may be on account of inadequate vocabulary, usage of wrong words, omission of needed words, etc.\n\n(ii) Symbols with different meanings: A word may have several meanings. The receiver has to perceive one such meaning for the word used by the communicator. For example, consider these three sentences where the word \u2018value\u2019 is used:\n(a) What is the value of this ring? \n(b) I value our friendship.\n(c) What is the value of learning computer skills? \nYou will find that the word \u2018value\u2019 gives a different meaning in different contexts. Wrong perception leads to communication problems.\n\n(iii) Faulty translations: Sometimes the communications originally drafted in one language (e.g., English) need to be translated into a language understandable to workers (e.g., Hindi). If the translator is not proficient in both languages, mistakes may creep in, causing different meanings to the communication.\n\n(iv) Unclarified assumptions: Some communications may have certain assumptions which are subject to different interpretations. For example, a boss may instruct his subordinate, \u201cTake care of our guest\u201d. The boss may mean that the subordinate should take care of the transport, food and accommodation of the guest until he leaves the place. The subordinate may interpret that the guest should be taken to a hotel with care. Actually, the guest suffers due to these unclarified assumptions.\n\n(v) Technical jargon: It is usually found that specialists use technical jargon while explaining to persons who are not specialists in the concerned field. Therefore, they may not understand the actual meaning of many such words.\n\n(vi) Body language and gesture decoding: Every movement of the body communicates some meaning. The body movements and gestures of the communicator matter a great deal in conveying the message.
If there is no match between what is said and what is expressed in body movements, communications may be wrongly perceived.", "doc_id": "cca666b8-4a43-11ed-8001-0242ac110007"} {"source": "NCERT XII Business Studies, India", "document": "Importance of Motivation: In the example of Tata Steel you have seen how direction, motivation and effective leadership have taken the company forward. Even communication systems in the company have encouraged employees to achieve targets.\n\nMotivation is considered important because it helps to identify and satisfy the needs of human resources in the organisation and thereby helps in improving their performance. It is the reason why every major organisation develops various kinds of motivational programmes and spends crores of rupees on these programmes. The importance of motivation can be pointed out through the following benefits:\n\n(i) Motivation helps to improve the performance levels of employees as well as the organisation. Since proper motivation satisfies the needs of employees, they in turn devote all their energies to optimum performance in their work. A satisfied employee can always turn out the expected performance. Good motivation in the organisation helps to achieve higher levels of performance, as motivated employees contribute their maximum efforts towards organisational goals.\n\n(ii) Motivation helps to change negative or indifferent attitudes of employees into positive attitudes so as to achieve organisational goals. For example, a worker may have an indifferent or negative attitude towards his work if he is not rewarded properly. If suitable rewards are given and the supervisor gives positive encouragement and praise for the good work done, the worker may slowly develop a positive attitude towards the work.\n\n(iii) Motivation helps to reduce employee turnover and thereby saves the cost of new recruitment and training. The main reason for a high rate of employee turnover is lack of motivation. If managers identify the motivational needs of employees and provide suitable incentives, employees may not think of leaving the organisation. A high rate of turnover compels management to go for new recruitment and training, which involve additional investment of money, time and effort. Motivation helps to save such costs. It also helps to retain talented people in the organisation.\n\nMaslow\u2019s Need Hierarchy Theory of Motivation: Since motivation is highly complex, many researchers have studied motivation from several dimensions and developed some theories. These theories help to develop an understanding of the motivation phenomenon. Among these, Maslow\u2019s Need Hierarchy Theory is considered fundamental to the understanding of motivation. Let us examine it in detail.\n\nAbraham Maslow, a well-known psychologist, in a classic paper published in 1943, outlined the elements of an overall theory of motivation. His theory was based on human needs. He felt that within every human being, there exists a hierarchy of five needs. These are:\n\n(i) Basic Physiological Needs: These needs are the most basic in the hierarchy and correspond to primary needs. Hunger, thirst, shelter, sleep and sex are some examples of these needs. In the organisational context, basic salary helps to satisfy these needs.\n\n(ii) Safety/Security Needs: These needs provide security and protection from physical and emotional harm.
Examples: job security, stability of income, pension plans, etc.\n\n(iii) Affiliation/Belonging Needs: These needs refer to affection, sense of belongingness, acceptance and friendship.\n\n(iv) Esteem Needs: These include factors such as self-respect, autonomy, status, recognition and attention.\n\n(v) Self Actualisation Needs: This is the highest level of need in the hierarchy. It refers to the drive to become what one is capable of becoming. These needs include growth, self-fulfillment and achievement of goals. \n\nMaslow\u2019s theory is based on the following assumptions:\n\n(i) People\u2019s behaviour is based on their needs. Satisfaction of such needs influences their behaviour.\n\n(ii) People\u2019s needs are in hierarchical order, starting from basic needs to other higher level needs.\n\n(iii) A satisfied need can no longer motivate a person; only the next higher level need can motivate him. \n\n(iv) A person moves to the next higher level of the hierarchy only when the lower need is satisfied.\n\nMaslow\u2019s theory focuses on needs as the basis for motivation. This theory is widely recognised and appreciated. However, some of his propositions are questioned, particularly his classification of needs and the hierarchy of needs. But, despite such criticism, the theory is still relevant because needs, no matter how they are classified, are important for understanding behaviour. It helps managers to realise that the need level of an employee should be identified in order to motivate him.", "doc_id": "d5cc32ba-4a46-11ed-b34b-0242ac110007"} {"source": "NCERT XII Business Studies, India", "document": "Planning and controlling are inseparable twins of management. A system of control presupposes the existence of certain standards. These standards of performance, which serve as the basis of controlling, are provided by planning. Once a plan becomes operational, controlling is necessary to monitor the progress, measure it, discover deviations and initiate corrective measures to ensure that events conform to plans. Thus, planning without controlling is meaningless. Similarly, controlling is blind without planning. If the standards are not set in advance, managers have nothing to control. When there is no plan, there is no basis for controlling. \n\nPlanning is clearly a prerequisite for controlling. It is utterly foolish to think that controlling could be accomplished without planning. Without planning there is no predetermined understanding of the desired performance. Planning seeks consistent, integrated and articulated programmes, while controlling seeks to compel events to conform to plans.\n\nPlanning is basically an intellectual process involving thinking, articulation and analysis to discover and prescribe an appropriate course of action for achieving objectives. Controlling, on the other hand, checks whether decisions have been translated into desired action. Planning is, thus, prescriptive whereas controlling is evaluative.\n\nIt is often said that planning is looking ahead while controlling is looking back. However, the statement is only partially correct. Plans are prepared for the future and are based on forecasts about future conditions. Therefore, planning involves looking ahead and is called a forward-looking function. On the contrary, controlling is like a postmortem of past activities to find out deviations from the standards. In that sense, controlling is a backward-looking function.
However, it should be understood that planning is guided by past experiences and the corrective action initiated by the control function aims to improve future performance. Thus, planning and controlling are both backward-looking as well as forward-looking functions.\n\nThus, planning and controlling are interrelated and, in fact, reinforce each other in the sense that\n1. Planning based on facts makes controlling easier and more effective; and\n2. Controlling improves future planning by providing information derived from past experience.\n\nControlling is a systematic process involving the following steps (a short worked example appears after this passage).\n1. Setting performance standards\n2. Measurement of actual performance\n3. Comparison of actual performance with standards\n4. Analysing deviations\n5. Taking corrective action\n\nStep 1: Setting Performance Standards: The first step in the controlling process is the setting up of performance standards. Standards are the criteria against which actual performance would be measured. Thus, standards serve as benchmarks towards which an organisation strives to work.\n\nStandards can be set in both quantitative as well as qualitative terms. For instance, standards set in terms of cost to be incurred, revenue to be earned, product units to be produced and sold, and time to be spent in performing a task all represent quantitative standards. Sometimes standards may also be set in qualitative terms. Improving goodwill and the motivation level of employees are examples of qualitative standards. The table on the next page gives a glimpse of standards used in different functional areas of business to gauge performance.\n\nAt the time of setting standards, a manager should try to set standards in precise quantitative terms, as this would make their comparison with actual performance much easier. For instance, a reduction of defects from 10 in every 1,000 pieces produced to 5 in every 1,000 pieces produced by the end of the quarter is a precise quantitative standard. However, whenever qualitative standards are set, an effort must be made to define them in a manner that would make their measurement easier. For instance, for improving customer satisfaction in a self-service fast food chain, standards can be set in terms of the time taken by a customer to wait for a table, the time taken by him to place the order and the time taken to collect the order.\n\nIt is important that standards should be flexible enough to be modified whenever required. Due to changes taking place in the internal and external business environment, standards may need some modification to remain realistic in the changed business environment.\n\nStep 2: Measurement of Actual Performance: Once performance standards are set, the next step is the measurement of actual performance. Performance should be measured in an objective and reliable manner. There are several techniques for the measurement of performance. These include personal observation, sample checking, performance reports, etc. As far as possible, performance should be measured in the same units in which standards are set, as this would make their comparison easier.", "doc_id": "0965104a-4a49-11ed-914c-0242ac110007"} {"source": "NCERT XII Geography, India", "document": "The process of adaptation, adjustment with and modification of the environment started with the appearance of human beings over the surface of the earth in different ecological niches. Thus, if we imagine the beginning of human geography with the interaction of environment and human beings, it has its roots deep in history.
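Returning to the controlling process described above, here is a minimal illustrative sketch, not from the textbook, of Steps 2 to 5 applied to the defect-rate standard; the production figures and the simple print-based "corrective action" are assumptions for illustration only.

```python
# Step 1: the performance standard, taken from the example above.
STANDARD_DEFECTS_PER_1000 = 5

# Step 2: measured actual performance (hypothetical figures).
pieces_produced = 12_000
defects_found = 78

# Step 3: compare actual performance with the standard, in the same units.
actual_per_1000 = defects_found / pieces_produced * 1000

# Step 4: analyse the deviation from the standard.
deviation = actual_per_1000 - STANDARD_DEFECTS_PER_1000

print(f"Actual: {actual_per_1000:.1f} defects per 1,000 pieces "
      f"(standard: {STANDARD_DEFECTS_PER_1000})")

# Step 5: take corrective action only when performance falls short.
if deviation > 0:
    print(f"Deviation of +{deviation:.1f} per 1,000: corrective action needed")
else:
    print("Performance within standard: no corrective action required")
```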
Thus, the concerns of human geography have a long temporal continuum though the approaches to articulate them have changed over time. This dynamism in approaches and thrusts shows the vibrant nature of the discipline. Earlier, there was little interaction between different societies and the knowledge about each other was limited. Travellers and explorers used to disseminate information about the areas of their visits. Navigational skills were not developed and voyages were fraught with dangers. The late fifteenth century witnessed attempts at exploration in Europe, and slowly the myths and mysteries about countries and people started to open up. The colonial period provided impetus to further explorations in order to access the resources of the regions and to obtain inventorised information. The intention here is not to present an in-depth historical account but to make you aware of the processes of steady development of human geography. The summarised Table 1.1 will introduce you to the broad stages and the thrust of human geography as a sub-field of geography.\n\nHuman geography, as you have seen, attempts to explain the relationship between all elements of human life and the space they occur over. Thus, human geography assumes a highly inter-disciplinary nature. It develops a close interface with sister disciplines in the social sciences in order to understand and explain human elements on the surface of the earth. With the expansion of knowledge, new sub-fields emerge, and this has also happened in human geography. Let us examine these fields and sub-fields of Human Geography (Table 1.2). You would have noticed that the list is large and comprehensive. It reflects the expanding realm of human geography. The boundaries between sub-fields often overlap. What follows in this book in the form of chapters will provide you with a fairly widespread coverage of different aspects of human geography. The exercises, the activities and the case studies will provide you with some empirical instances so as to have a better understanding of its subject matter.", "doc_id": "ada945f2-4a50-11ed-bcf1-0242ac110007"} {"source": "NCERT XII Geography, India", "document": "Each unit of land has limited capacity to support people living on it. Hence, it is necessary to understand the ratio between the number of people and the size of the land. This ratio is the density of population. It is usually measured in persons per sq km.\n\nI. Geographical Factors\n(i) Availability of water: Water is the most important factor for life. So, people prefer to live in areas where fresh water is easily available. Water is used for drinking, bathing and cooking \u2013 and also for cattle, crops, industries and navigation. It is because of this that river valleys are among the most densely populated areas of the world.\n\n(ii) Landforms: People prefer living on flat plains and gentle slopes. This is because such areas are favourable for the production of crops and for building roads and industries. Mountainous and hilly areas hinder the development of transport networks and hence initially do not favour agricultural and industrial development. So, these areas tend to be less populated. The Ganga plains are among the most densely populated areas of the world while the mountain zones in the Himalayas are sparsely populated.\n\n(iii) Climate: Extreme climates, such as those of very hot or cold deserts, are uncomfortable for human habitation. Areas with a comfortable climate, where there is not much seasonal variation, attract more people. Areas with very heavy rainfall or extreme and harsh climates have low population. Mediterranean regions were inhabited from early periods in history due to their pleasant climate.\n\n(iv) Soils: Fertile soils are important for agricultural and allied activities. Therefore, areas which have fertile loamy soils have more people living on them, as these can support intensive agriculture. Can you name some areas in India which are thinly populated due to poor soils?\n\nII. Economic Factors\n(i) Minerals: Areas with mineral deposits attract industries. Mining and industrial activities generate employment. So, skilled and semi-skilled workers move to these areas and make them densely populated.
The Katanga-Zambia copper belt in Africa is one such good example.\n\n(ii) Urbanisation: Cities offer better employment opportunities, educational and medical facilities, and better means of transport and communication. Good civic amenities and the attraction of city life draw people to the cities. This leads to rural-to-urban migration, and cities grow in size. Mega cities of the world continue to attract large numbers of migrants every year.\n\n(iii) Industrialisation: Industrial belts provide job opportunities and attract large numbers of people. These include not just factory workers but also transport operators, shopkeepers, bank employees, doctors, teachers and other service providers. The Kobe-Osaka region of Japan is thickly populated because of the presence of a number of industries.\n\nIII. Social and Cultural Factors\nSome places attract more people because they have religious or cultural significance. In the same way, people tend to move away from places where there is social and political unrest. Many a time, governments offer incentives to people to live in sparsely populated areas or move away from overcrowded places.", "doc_id": "8b068980-4a53-11ed-b7bc-0242ac110007"} {"source": "NCERT XII Geography, India", "document": "A small increase in population is desirable in a growing economy. However, population growth beyond a certain level leads to problems. Of these, the depletion of resources is the most serious. Population decline is also a matter of concern. It indicates that resources that had supported a population earlier are now insufficient to maintain the population.\n\nThe deadly HIV/AIDS epidemics in Africa and some parts of the Commonwealth of Independent States (CIS) and Asia have pushed up death rates and reduced average life expectancy. This has slowed down population growth.\n\nDemographic transition theory can be used to describe and predict the future population of any area. The theory tells us that the population of any region changes from high births and high deaths to low births and low deaths as the society progresses from a rural, agrarian and illiterate one to an urban, industrial and literate one. These changes occur in stages, which are collectively known as the demographic cycle.\n\nThe first stage has high fertility and high mortality because people reproduce more to compensate for the deaths due to epidemics and variable food supply. Population growth is slow and most of the people are engaged in agriculture, where large families are an asset. Life expectancy is low, people are mostly illiterate and have low levels of technology. Two hundred years ago, all the countries of the world were in this stage.\n\nFertility remains high in the beginning of the second stage but it declines with time. This is accompanied by a reduced mortality rate. Improvements in sanitation and health conditions lead to a decline in mortality. Because of this gap, the net addition to population is high.\n\nIn the last stage, both fertility and mortality decline considerably. The population is either stable or grows slowly. The population becomes urbanised and literate, has high technical know-how and deliberately controls the family size. This shows that human beings are extremely flexible and are able to adjust their fertility.\n\nIn the present day, different countries are at different stages of demographic transition.\n\nFamily planning is the spacing or prevention of the birth of children.
Access to family planning services is a significant factor in limiting population growth and improving women\u2019s health. Propaganda, free availability of contraceptives and tax disincentives for large families are some of the measures which can help population control.\n\nThomas Malthus, in his theory (1798), stated that the number of people would increase faster than the food supply. Any further increase would result in a population crash caused by famine, disease and war. The preventive checks are better than the physical checks. For the sustainability of our resources, the world will have to control the rapid population increase.", "doc_id": "62ae785c-4a54-11ed-8c97-0242ac110007"} {"source": "NCERT XII Geography, India", "document": "The population growth or population change refers to the change in the number of inhabitants of a territory during a specific period of time. This change may be positive as well as negative. It can be expressed either in terms of absolute numbers or in terms of percentage. Population change in an area is an important indicator of economic development, social upliftment and the historical and cultural background of the region.\n\nThere are three components of population change \u2013 births, deaths and migration. The crude birth rate (CBR) is expressed as the number of live births in a year per thousand of population.\n\nDeath rate plays an active role in population change. Population growth occurs not only by an increasing birth rate but also due to a decreasing death rate. The Crude Death Rate (CDR) is a simple method of measuring the mortality of any area. CDR is expressed in terms of the number of deaths in a particular year per thousand of population in a particular region (a worked example of these rates follows this passage).\n\nBy and large, mortality rates are affected by the region\u2019s demographic structure, social advancement and levels of its economic development.\n\nApart from births and deaths, there is another way by which population size changes. When people move from one place to another, the place they move from is called the Place of Origin and the place they move to is called the Place of Destination. The place of origin shows a decrease in population while the population increases in the place of destination. Migration may be interpreted as a spontaneous effort to achieve a better balance between population and resources.", "doc_id": "ef42fcd4-4a54-11ed-80f5-0242ac110007"}
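As a worked illustration of the crude birth rate and crude death rate defined in the record above, the following minimal sketch computes both rates per thousand, together with the resulting natural increase; the population and the counts of births and deaths are assumed figures, not data from any census.

```python
# CBR = live births in a year per thousand of population.
# CDR = deaths in a year per thousand of population.
# All figures below are hypothetical.
population = 2_500_000
live_births = 55_000
deaths = 20_000

cbr = live_births / population * 1000   # -> 22.0 per thousand
cdr = deaths / population * 1000        # -> 8.0 per thousand
natural_increase = cbr - cdr            # growth before counting migration

print(f"CBR = {cbr:.1f} per thousand")
print(f"CDR = {cdr:.1f} per thousand")
print(f"Natural increase = {natural_increase:.1f} per thousand")
```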
{"source": "NCERT XII Sociology, India", "document": "In general, Asia has a low sex ratio. Countries like China, India, Saudi Arabia, Pakistan and Afghanistan have a lower sex ratio. At the other extreme is the greater part of Europe (including Russia), where males are in a minority. A deficit of males in the populations of many European countries is attributed to the better status of women, and an excessively male-dominated out-migration to different parts of the world in the past.\n\nAge structure represents the number of people in different age groups. This is an important indicator of population composition, since a large population in the age group of 15-59 indicates a large working population. A greater proportion of the population above 60 years represents an ageing population, which requires more expenditure on health care facilities. Similarly, a high proportion of young population would mean that the region has a high birth rate and the population is youthful.\n\nThe age-sex structure of a population refers to the number of females and males in different age groups. A population pyramid is used to show the age-sex structure of the population.\n\nThe shape of the population pyramid reflects the characteristics of the population. The left side shows the percentage of males while the right side shows the percentage of females in each age group.\n\nThe age-sex pyramid of Nigeria, as you can see, is a triangular-shaped pyramid with a wide base and is typical of less developed countries. These have larger populations in lower age groups due to high birth rates. If you construct the pyramids for Bangladesh and Mexico, they would look the same.\n\nAustralia\u2019s age-sex pyramid is bell shaped and tapered towards the top. This shows that birth and death rates are almost equal, leading to a near constant population.\n\nThe Japan pyramid has a narrow base and a tapered top, showing low birth and death rates. The population growth in developed countries is usually zero or negative.", "doc_id": "5330decc-4a56-11ed-b46f-0242ac110007"} {"source": "NCERT XII Sociology, India", "document": "Among the most famous theories of demography is the one associated with the English political economist Thomas Robert Malthus (1766-1834). Malthus\u2019s theory of population growth \u2013 outlined in his Essay on Population (1798) \u2013 was a rather pessimistic one. He argued that human populations tend to grow at a much faster rate than the rate at which the means of human subsistence (specially food, but also clothing and other agriculture-based products) can grow. Therefore humanity is condemned to live in poverty forever because the growth of agricultural production will always be overtaken by population growth.
While population rises in geometric progression (i.e., like 2, 4, 8, 16, 32 etc.), agricultural production can only grow in arithmetic progression (i.e., like 2, 4, 6, 8, 10 etc.); the short sketch at the end of this passage illustrates these two progressions. Because population growth always outstrips growth in the production of subsistence resources, the only way to increase prosperity is by controlling the growth of population. Unfortunately, humanity has only a limited ability to voluntarily reduce the growth of its population (through \u2018preventive checks\u2019 such as postponing marriage or practicing sexual abstinence or celibacy). Malthus believed therefore that \u2018positive checks\u2019 to population growth \u2013 in the form of famines and diseases \u2013 were inevitable because they were nature\u2019s way of dealing with the imbalance between food supply and increasing population.\n\nMalthus\u2019s theory was influential for a long time. But it was also challenged by theorists who claimed that economic growth could outstrip population growth. However, the most effective refutation of his theory was provided by the historical experience of European countries. The pattern of population growth began to change in the latter half of the nineteenth century, and by the end of the first quarter of the twentieth century these changes were quite dramatic. Birth rates had declined, and outbreaks of epidemic diseases were being controlled. Malthus\u2019s predictions were proved false because both food production and standards of living continued to rise despite the rapid growth of population.\n\nMalthus was also criticised by liberal and Marxist scholars for asserting that poverty was caused by population growth. The critics argued that problems like poverty and starvation were caused by the unequal distribution of economic resources rather than by population growth. An unjust social system allowed a wealthy and privileged minority to live in luxury while the vast majority of the people were forced to live in poverty.\n\nAnother significant theory in demography is the theory of demographic transition. This suggests that population growth is linked to overall levels of economic development and that every society follows a typical pattern of development-related population growth. There are three basic stages of population growth. The first stage is that of low population growth in a society that is underdeveloped and technologically backward. Growth rates are low because both the death rate and the birth rate are very high, so that the difference between the two (or the net growth rate) is low. The third (and last) stage is also one of low growth in a developed society where both death rate and birth rate have been reduced considerably and the difference between them is again small. Between these two stages is a transitional stage of movement from a backward to an advanced stage, and this stage is characterised by very high rates of growth of population.\n\nThis \u2018population explosion\u2019 happens because death rates are brought down relatively quickly through advanced methods of disease control, public health, and better nutrition. However, it takes longer for society to adjust to change and alter its reproductive behaviour (which had evolved during the period of poverty and high death rates) to suit the new situation of relative prosperity and longer life spans. This kind of transition was effected in Western Europe during the late nineteenth and early twentieth century.
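To make Malthus's contrast concrete, here is a minimal illustrative sketch, not from the source text, comparing a population that doubles every period (geometric progression) with a food supply that grows by a constant amount (arithmetic progression); the number of periods and the starting values are arbitrary assumptions.

```python
# Geometric progression: 2, 4, 8, 16, ... (population, per Malthus).
# Arithmetic progression: 2, 4, 6, 8, ... (agricultural production).
periods = 8
population = [2 * 2**n for n in range(periods)]
food_supply = [2 * (n + 1) for n in range(periods)]

for n, (pop, food) in enumerate(zip(population, food_supply)):
    note = "population outstrips food" if pop > food else "food keeps pace"
    print(f"period {n}: population={pop:>3}  food={food:>2}  ({note})")

# From period 2 onwards the gap widens rapidly, which is why Malthus
# expected 'positive checks' unless population growth was restrained.
```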
More or less similar patterns are followed in the less developed countries that are struggling to reduce the birth rate in keeping with the falling mortality rate. In India too, the demographic transition is not yet complete, as the mortality rate has been reduced but the birth rate has not been brought down to the same extent.", "doc_id": "69ff4e44-4a5c-11ed-b17d-0242ac110007"} {"source": "NCERT XII Sociology, India", "document": "The dependency ratio is a measure comparing the portion of a population which is composed of dependents (i.e., elderly people who are too old to work, and children who are too young to work) with the portion that is in the working age group, generally defined as 15 to 64 years. The dependency ratio is equal to the population below 15 or above 64, divided by the population in the 15-64 age group. This is usually expressed as a percentage (a worked example appears at the end of this passage). A rising dependency ratio is a cause for worry in countries that are facing an ageing population, since it becomes difficult for a relatively smaller proportion of working-age people to carry the burden of providing for a relatively larger proportion of dependents. On the other hand, a falling dependency ratio can be a source of economic growth and prosperity due to the larger proportion of workers relative to non-workers. This is sometimes referred to as the \u2018demographic dividend\u2019, or benefit flowing from the changing age structure. However, this benefit is temporary because the larger pool of working-age people will eventually turn into non-working old people.\n\nIndia is the second most populous country in the world after China, with a total population of 121 crores (or 1.21 billion) according to the Census of India 2011. As can be seen from Table 1, the growth rate of India\u2019s population has not always been very high. Between 1901 and 1951, the average annual growth rate did not exceed 1.33%, a modest rate of growth. In fact, between 1911 and 1921 there was a negative rate of growth of \u20130.03%. This was because of the influenza epidemic during 1918\u201319, which killed about 12.5 million persons, or 5% of the total population of the country (Visaria and Visaria 2003: 191). The growth rate of population substantially increased after independence from British rule, going up to 2.2% during 1961-1981. Since then, although the annual growth rate has decreased, it remains one of the highest in the developing world. Chart 1 shows the comparative movement of the crude birth and death rates. The impact of the demographic transition phase is clearly seen in the graph, where they begin to diverge from each other after the decade of 1921 to 1931. Before 1931, both death rates and birth rates were high, whereas after this transitional moment the death rates fell sharply but the birth rate only fell slightly.\n\nThe principal reasons for the decline in the death rate after 1921 were increased levels of control over famines and epidemic diseases. The latter cause was perhaps the most important. The major epidemic diseases in the past were fevers of various sorts, plague, smallpox and cholera. But the single biggest epidemic was the influenza epidemic of 1918-19, which killed as many as 170 lakh people, or about 5% of the total population of India at that time. (Estimates of deaths vary, and some are much higher. Also known as \u2018Spanish Flu\u2019, the influenza pandemic was a global phenomenon.)
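As a worked illustration of the dependency ratio defined earlier in this passage, the sketch below divides dependents (below 15 or above 64) by the working-age population and expresses the result as a percentage; all the age-group figures are assumed, not census values.

```python
# Dependency ratio = (population below 15 + population above 64)
#                    / population aged 15-64, expressed as a percentage.
# The figures below are hypothetical, in lakhs of persons.
below_15 = 360
above_64 = 90
working_age_15_to_64 = 650

dependency_ratio = (below_15 + above_64) / working_age_15_to_64 * 100
print(f"Dependency ratio = {dependency_ratio:.1f}%")  # -> 69.2%

# A falling ratio (fewer dependents per worker) corresponds to the
# 'demographic dividend' described above; a rising ratio signals ageing.
```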
Improvements in medical cures for these diseases, programmes for mass vaccination, and efforts to improve sanitation helped to control epidemics. However, diseases like malaria, tuberculosis, diarrhoea and dysentery continue to kill people even today, although the numbers are nowhere as high as they used to be in the epidemics of the past. Surat witnessed a small epidemic of plague in September 1994, while dengue and chikungunya epidemics have since been reported in various parts of the country.\n\nFamines were also a major and recurring source of increased mortality. Famines were caused by high levels of continuing poverty and malnutrition in an agroclimatic environment that was very vulnerable to variations in rainfall. Lack of adequate means of transportation and communication as well as inadequate efforts on the part of the state were some of the factors responsible for famines. However, as scholars like Amartya Sen and others have shown, famines were not necessarily due to a fall in foodgrain production; they were also caused by a \u2018failure of entitlements\u2019, or the inability of people to buy or otherwise obtain food. Substantial improvements in the productivity of Indian agriculture (specially through the expansion of irrigation); improved means of communication; and more vigorous relief and preventive measures by the state have all helped to drastically reduce deaths from famine. Nevertheless, starvation deaths are still reported from some backward regions of the country. The Mahatma Gandhi National Rural Employment Guarantee Act is the latest state initiative to tackle the problem of hunger and starvation in rural areas.", "doc_id": "0041f8c6-4abb-11ed-add1-0242ac110007"} {"source": "NCERT XII Sociology, India", "document": "Like any Indian, you already know that \u2018caste\u2019 is the name of an ancient social institution that has been part of Indian history and culture for thousands of years. But like any Indian living in the twenty-first century, you also know that something called \u2018caste\u2019 is definitely a part of Indian society today. To what extent are these two \u2018castes\u2019 \u2013 the one that is supposed to be part of India\u2019s past, and the one that is part of its present \u2013 the same thing? This is the question that we will try to answer in this section.\n\nCaste is an institution uniquely associated with the Indian sub-continent. While social arrangements producing similar effects have existed in other parts of the world, the exact form has not been found elsewhere. Although it is an institution characteristic of Hindu society, caste has spread to the major non-Hindu communities of the Indian sub-continent. This is specially true of Muslims, Christians and Sikhs.\n\nAs is well-known, the English word \u2018caste\u2019 is actually a borrowing from the Portuguese casta, meaning pure breed. The word refers to a broad institutional arrangement that in Indian languages (beginning with the ancient Sanskrit) is referred to by two distinct terms, varna and jati. Varna, literally \u2018colour\u2019, is the name given to a four-fold division of society into brahmana, kshatriya, vaishya and shudra, though this excludes a significant section of the population composed of the \u2018outcastes\u2019, foreigners, slaves, conquered peoples and others, sometimes referred to as the panchamas or fifth category. Jati is a generic term referring to species or kinds of anything, ranging from inanimate objects to plants, animals and human beings.
Jati is the word most commonly used to refer to the institution of caste in Indian languages, though it is interesting to note that, increasingly, Indian language speakers are beginning to use the English word \u2018caste\u2019.\n\nThese features are the prescribed rules found in ancient scriptural texts. Since these prescriptions were not always practiced, we cannot say to what extent these rules actually determined the empirical reality of caste \u2013 its concrete meaning for the people living at that time. As you can see, most of the prescriptions involved prohibitions or restrictions of various sorts. It is also clear from the historical evidence that caste was a very unequal institution \u2013 some castes benefitted greatly from the system, while others were condemned to a life of endless labour and subordination. Most important, once caste became rigidly determined by birth, it was in principle impossible for a person to ever change their life circumstances. Whether they deserved it or not, an upper caste person would always have high status, while a lower caste person would always be of low status.\n\nTheoretically, the caste system can be understood as the combination of two sets of principles, one based on difference and separation and the other on wholism and hierarchy. Each caste is supposed to be different from \u2013 and is therefore strictly separated from \u2013 every other caste. Many of the scriptural rules of caste are thus designed to prevent the mixing of castes \u2013 rules ranging from marriage, food sharing and social interaction to occupation. On the other hand, these different and separated castes do not have an individual existence \u2013 they can only exist in relation to a larger whole, the totality of society consisting of all castes. Further, this societal whole or system is a hierarchical rather than egalitarian system. Each individual caste occupies not just a distinct place, but also an ordered rank \u2013 a particular position in a ladder-like arrangement going from highest to lowest.\n\nThe hierarchical ordering of castes is based on the distinction between \u2018purity\u2019 and \u2018pollution\u2019. This is a division between something believed to be closer to the sacred (thus connoting ritual purity), and something believed to be distant from or opposed to the sacred, therefore considered ritually polluting. Castes that are considered ritually pure have high status, while those considered less pure or impure have low status. As in all societies, material power (i.e., economic or military power) is closely associated with social status, so that those in power tend to be of high status, and vice versa. Historians believe that those who were defeated in wars were often assigned low caste status.", "doc_id": "825738c2-4ac4-11ed-aeb9-0242ac110007"} {"source": "NCERT XII Sociology, India", "document": "The English word \u2018caste\u2019 is actually a borrowing from the Portuguese casta, meaning pure breed. The word refers to a broad institutional arrangement that in Indian languages (beginning with the ancient Sanskrit) is referred to by two distinct terms, varna and jati. Varna, literally \u2018colour\u2019, is the name given to a four-fold division of society into brahmana, kshatriya, vaishya and shudra, though this excludes a significant section of the population composed of the \u2018outcastes\u2019, foreigners, slaves, conquered peoples and others, sometimes referred to as the panchamas or fifth category.
Jati is a generic term referring to species or kinds of anything, ranging from inanimate objects to plants, animals and human beings. Jati is the word most commonly used to refer to the institution of caste in Indian languages, though it is interesting to note that, increasingly, Indian language speakers are beginning to use the English word \u2018caste\u2019.\n\nThe precise relationship between varna and jati has been the subject of much speculation and debate among scholars. The most common interpretation is to treat varna as a broad all-India aggregative classification, while jati is taken to be a regional or local sub-classification involving a much more complex system consisting of hundreds or even thousands of castes and sub-castes.\n\nThis means that while the four varna classification is common to all of India, the jati hierarchy has more local classifications that vary from region to region. Opinions also differ on the exact age of the caste system. It is generally agreed, though, that the four varna classification is roughly three thousand years old. However, the \u2018caste system\u2019 stood for different things in different time periods, so that it is misleading to think of the same system continuing for three thousand years. In its earliest phase, in the late Vedic period roughly between 900 and 500 BC, the caste system was really a varna system and consisted of only four major divisions. These divisions were not very elaborate or very rigid, and they were not determined by birth. Movement across the categories seems to have been not only possible but quite common. It is only in the post-Vedic period that caste became the rigid institution that is familiar to us from well-known definitions.\n\nThe most commonly cited defining features of caste are the following:\n1. Caste is determined by birth \u2013 a child is \u201cborn into\u201d the caste of its parents. Caste is never a matter of choice. One can never change one\u2019s caste, leave it, or choose not to join it, although there are instances where a person may be expelled from their caste.\n2. Membership in a caste involves strict rules about marriage. Caste groups are \u201cendogamous\u201d, i.e., marriage is restricted to members of the group.\n3. Caste membership also involves rules about food and food-sharing. What kinds of food may or may not be eaten is prescribed and who one may share food with is also specified.\n4. Caste involves a system consisting of many castes arranged in a hierarchy of rank and status. In theory, every person has a caste, and every caste has a specified place in the hierarchy of all castes. While the hierarchical position of many castes, particularly in the middle ranks, may vary from region to region, there is always a hierarchy.\n5. Castes also involve sub-divisions within themselves, i.e., castes almost always have sub-castes and sometimes sub-castes may also have sub-sub-castes. This is referred to as a segmental organisation.\n6. Castes were traditionally linked to occupations. A person born into a caste could only practice the occupation associated with that caste, so that occupations were hereditary, i.e., passed on from generation to generation. On the other hand, a particular occupation could only be pursued by the caste associated with it \u2013 members of other castes could not enter the occupation.\n\nThese features are the prescribed rules found in ancient scriptural texts.
Since these prescriptions were not always practiced, we cannot say to what extent these rules actually determined the empirical reality of caste \u2013 its concrete meaning for the people living at that time. As you can see, most of the prescriptions involved prohibitions or restrictions of various sorts. It is also clear from the historical evidence that caste was a very unequal institution \u2013 some castes benefitted greatly from the system, while others were condemned to a life of endless labour and subordination. Most important, once caste became rigidly determined by birth, it was in principle impossible for a person to ever change their life circumstances. Whether they deserved it or not, an upper caste person would always have high status, while a lower caste person would always be of low status.", "doc_id": "1b0f4cf8-4ac5-11ed-a912-0242ac110007"} {"source": "NCERT XII Sociology, India", "document": "Not surprisingly, our sources of knowledge about the past and specially the ancient past are inadequate. It is difficult to be very certain about what things were like at that time, or the reasons why some institutions and practices came to be established. But even if we knew all this, it still cannot tell us about what should be done today. Just because something happened in the past or is part of our tradition, it is not necessarily right or wrong forever. Every age has to think afresh about such questions and come to its own collective decision about its social institutions.\n\nCompared to the ancient past, we know a lot more about caste in our recent history. If modern history is taken to begin with the nineteenth century, then Indian Independence in 1947 offers a natural dividing line between the colonial period (roughly 150 years from around 1800 to 1947) and the post-Independence or post-colonial period (the seven decades from 1947 to the present day). The present form of caste as a social institution has been shaped very strongly by both the colonial period as well as the rapid changes that have come about in independent India.\n\nScholars have agreed that all major social institutions and specially the institution of caste underwent major changes during the colonial period. In fact, some scholars argue that what we know today as caste is more a product of colonialism than of ancient Indian tradition. Not all of the changes brought about were intended or deliberate. Initially, the British administrators began by trying to understand the complexities of caste in an effort to learn how to govern the country efficiently. Some of these efforts took the shape of very methodical and intensive surveys and reports on the \u2018customs and manners\u2019 of various tribes and castes all over the country. Many British administrative officials were also amateur ethnologists and took great interest in pursuing such surveys and studies.\n\nBut by far the most important official effort to collect information on caste was through the census. First begun in the 1860s, the census became a regular ten-yearly exercise conducted by the British Indian government from 1881 onwards. The 1901 Census under the direction of Herbert Risley was particularly important as it sought to collect information on the social hierarchy of caste - i.e., the social order of precedence in particular regions, as to the position of each caste in the rank order. 
This effort had a huge impact on social perceptions of caste, and hundreds of petitions were addressed to the Census Commissioner by representatives of different castes claiming a higher position in the social scale and offering historical and scriptural evidence for their claims. Overall, scholars feel that this kind of direct attempt to count caste and to officially record caste status changed the institution itself. Before this kind of intervention, caste identities had been much more fluid and less rigid; once they began to be counted and recorded, caste began to take on a new life.\n\nOther interventions by the colonial state also had an impact on the institution. The land revenue settlements and related arrangements and laws served to give legal recognition to the customary (caste-based) rights of the upper castes. These castes now became land owners in the modern sense rather than feudal classes with claims on the produce of the land, or claims to revenue or tribute of various kinds. Large-scale irrigation schemes like the ones in the Punjab were accompanied by efforts to settle populations there, and these also had a caste dimension. At the other end of the scale, towards the end of the colonial period, the administration also took an interest in the welfare of downtrodden castes, referred to as the \u2018depressed classes\u2019 at that time. It was as part of these efforts that the Government of India Act of 1935 was passed, which gave legal recognition to the lists or \u2018schedules\u2019 of castes and tribes marked out for special treatment by the state. This is how the terms \u2018Scheduled Tribes\u2019 and \u2018Scheduled Castes\u2019 came into being. Castes at the bottom of the hierarchy that suffered severe discrimination, including all the so-called \u2018untouchable\u2019 castes, were included among the Scheduled Castes.\n\nThus colonialism brought about major changes in the institution of caste. Perhaps it would be more accurate to say that the institution of caste underwent fundamental changes during the colonial period. Not just India, but the whole world was undergoing rapid change during this period due to the spread of capitalism and modernity.\n\nIndian Independence in 1947 marked a big, but ultimately only partial, break with the colonial past. Caste considerations had inevitably played a role in the mass mobilisations of the nationalist movement. Efforts to organise the \u201cdepressed classes\u201d and particularly the untouchable castes predated the nationalist movement, having begun in the second half of the nineteenth century. This was an initiative taken from both ends of the caste spectrum \u2013 by upper caste progressive reformers as well as by members of the lower castes such as Mahatma Jotiba Phule and Babasaheb Ambedkar in western India, and Ayyankali, Sri Narayana Guru, Iyotheedass and Periyar (E.V. Ramaswamy Naickar) in the South.
The patterns for emulation chosen most often were the brahmin or kshatriya castes; practices included adopting vegetarianism, the wearing of the sacred thread, the performance of specific prayers and religious ceremonies, and so on. Sanskritisation usually accompanies or follows a rise in the economic status of the caste attempting it, though it may also occur independently. Subsequent research has led to many modifications and revisions being suggested for this concept. These include the argument that sanskritisation may be a defiant claiming of previously prohibited ritual/social privileges (such as the wearing of the sacred thread, which used to invite severe punishment) rather than a flattering imitation of the \u2018upper\u2019 castes by the \u2018lower\u2019 castes.\n\n\u2018Dominant caste\u2019 is a term used to refer to those castes which had a large population and were granted land rights by the partial land reforms effected after Independence. The land reforms took away rights from the erstwhile claimants, the upper castes who were \u2018absentee landlords\u2019 in the sense that they played no part in the agricultural economy other than claiming their rent. They frequently did not live in the village either, but were based in towns and cities. These land rights now came to be vested in the next layer of claimants, those who were involved in the management of agriculture but were not themselves the cultivators. These intermediate castes in turn depended on the labour of the lower castes, including specially the \u2018untouchable\u2019 castes, for tilling and tending the land. However, once they got land rights, they acquired considerable economic power. Their large numbers also gave them political power in the era of electoral democracy based on universal adult franchise. Thus, these intermediate castes became the \u2018dominant\u2019 castes in the countryside and played a decisive role in regional politics and the agrarian economy. Examples of such dominant castes include the Yadavs of Bihar and Uttar Pradesh, the Vokkaligas of Karnataka, the Reddys and Khammas of Andhra Pradesh, the Marathas of Maharashtra, the Jats of Punjab, Haryana and Western Uttar Pradesh and the Patidars of Gujarat.\n\nOne of the most significant yet paradoxical changes in the caste system in the contemporary period is that it has tended to become \u2018invisible\u2019 for the upper caste, urban middle and upper classes. For these groups, who have benefited the most from the developmental policies of the post-colonial era, caste has appeared to decline in significance precisely because it has done its job so well. Their caste status had been crucial in ensuring that these groups had the necessary economic and educational resources to take full advantage of the opportunities offered by rapid development. In particular, the upper caste elite were able to benefit from subsidised public education, specially professional education in science, technology, medicine and management. At the same time, they were also able to take advantage of the expansion of public sector jobs in the early decades after Independence. In this initial period, their lead over the rest of society (in terms of education) ensured that they did not face any serious competition. As their privileged status got consolidated in the second and third generations, these groups began to believe that their advancement had little to do with caste.
Certainly, for the third generation of these groups, their economic and educational capital alone is quite sufficient to ensure that they will continue to get the best in terms of life chances. For this group, it now seems that caste plays no part in their public lives, being limited to the personal sphere of religious practice or marriage and kinship. However, a further complication is introduced by the fact that this is a differentiated group. Although the privileged as a group are overwhelmingly upper caste, not all upper caste people are privileged, some being poor. For the so-called scheduled castes and tribes and the backward castes, the opposite has happened. For them, caste has become all too visible; indeed, their caste has tended to eclipse the other dimensions of their identities. Because they have no inherited educational and social capital, and because they must compete with an already entrenched upper caste group, they cannot afford to abandon their caste identity, for it is one of the few collective assets they have. Moreover, they continue to suffer from discrimination of various kinds. The policies of reservation and other forms of protective discrimination instituted by the state in response to political pressure serve as their lifelines. But using this lifeline tends to make their caste the all-important and often the only aspect of their identity that the world recognises.\n\nThe juxtaposition of these two groups \u2013 a seemingly caste-less upper caste group and an apparently caste-defined lower caste group \u2013 is one of the central aspects of the institution of caste in the present.\n\n\u2018Tribe\u2019 is a modern term for communities that are very old, being among the oldest inhabitants of the sub-continent. Tribes in India have generally been defined in terms of what they were not. Tribes were communities that did not practice a religion with a written text; did not have a state or political form of the normal kind; did not have sharp class divisions; and, most important, did not have caste and were neither Hindus nor peasants. The term was introduced in the colonial era. The use of a single term for a very disparate set of communities was more a matter of administrative convenience.
An extended family (commonly known as the \u2018joint family\u2019) can take different forms, but has more than one couple, and often more than two generations, living together. This could be a set of brothers with their individual families, or an elderly couple with their sons and grandsons and their respective families. The extended family is often seen as symptomatic of India. Yet this is by no means the dominant form now or earlier. It was confined to certain sections and certain regions of the community. Indeed, the term \u2018joint family\u2019 itself is not a native category. As I.P. Desai observes, \u201cThe expression \u2018joint family\u2019 is not the translation of any Indian word like that. It is interesting to note that the words used for joint family in most of the Indian languages are the equivalents of translations of the English word \u2018joint family\u2019.\u201d (Desai 1964:40)\n\nStudies have shown how diverse family forms are found in different societies. With regard to the rule of residence, some societies are matrilocal in their marriage and family customs while others are patrilocal. In the first case, the newly married couple stays with the woman\u2019s parents, whereas in the second case the couple lives with the man\u2019s parents. With regard to the rules of inheritance, matrilineal societies pass on property from mother to daughter while patrilineal societies do so from father to son. A patriarchal family structure exists where the men exercise authority and dominance, and matriarchy where the women play a similarly dominant role. However, matriarchy \u2013 unlike patriarchy \u2013 has been a theoretical rather than an empirical concept. There is no historical or anthropological evidence of matriarchy, i.e., societies where women exercise dominance. However, there do exist matrilineal societies, i.e., societies where women inherit property from their mothers but do not exercise control over it, nor are they the decision makers in public affairs.\n\nThe Meghalaya Succession Act (passed by an all-male Meghalaya legislative assembly) received the President\u2019s assent in 1986. The Succession Act applies specifically to the Khasi and Jaintia tribes of Meghalaya and confers on \u2018any Khasi and Jaintia of sound mind not being a minor, the right to dispose of his self-acquired property by will\u2019. The practice of making out a will does not exist in Khasi custom. Khasi custom prescribes the devolution of ancestral property in the female line.\n\nThere is a feeling, specially among the educated Khasi, that their rules of kinship and inheritance are biased in favour of women and are too restrictive. The Succession Act is therefore seen as an attempt at removing such restrictions and at correcting the perceived female bias in the Khasi tradition. To assess whether the popular perception of female bias in the Khasi tradition is indeed valid, it is necessary to view the Khasi matrilineal system in the context of the prevalent gender relations and definitions of gender roles.\n\nSeveral scholars have highlighted the inherent contradictions in matrilineal systems. One such contradiction arises from the separation of the line of descent and inheritance on the one hand and the structure of authority and control on the other. The former, which links the mother to the daughter, comes into conflict with the latter, which links the mother\u2019s brother to the sister\u2019s son.
In other words, a woman inherits property from her mother and passes it on to her daughter, while a man controls his sister\u2019s property and passes on control to his sister\u2019s son. Thus, inheritance passes from mother to daughter whereas control passes from (maternal) uncle to nephew.\n\nKhasi matriliny generates intense role conflict for men. They are torn between their responsibilities to their natal house on the one hand, and to their wife and children on the other. In a way, the strain generated by such role conflict affects Khasi women more intensely. A woman can never be fully assured that her husband does not find his sister\u2019s house a more congenial place than her own. Similarly, a sister will be apprehensive about her brother\u2019s commitment to her welfare because the wife with whom he lives can always pull him away from his responsibilities to his natal house.\n\nThe women are more adversely affected than men by the role conflict generated in the Khasi matrilineal system, not only because men wield power and women are deprived of it, but also because the system is more lenient to men when there is a transgression of rules. Women possess only token authority in Khasi society; it is men who are the de facto power holders. The system is indeed weighted in favour of male matri-kin rather than male patri-kin.", "doc_id": "30607628-4ad3-11ed-9855-0242ac110007"} {"source": "NCERT XII Sociology, India", "document": "The discipline of economics is aimed at understanding and explaining how markets work in modern capitalist economies \u2013 for instance, how prices are determined, the probable impact of specific kinds of investment, or the factors that influence people to save or spend. So what does sociology have to contribute to the study of markets that goes beyond what economics can tell us?\n\nTo answer this question, we need to go back briefly to eighteenth-century England and the beginnings of modern economics, which at that time was called \u2018political economy\u2019. The most famous of the early political economists was Adam Smith, who, in his book The Wealth of Nations, attempted to understand the market economy that was just emerging at that time. Smith argued that the market economy is made up of a series of individual exchanges or transactions, which automatically create a functioning and ordered system. This happens even though none of the individuals involved in the millions of transactions had intended to create a system. Each person looks only to her or his own self-interest, but in the pursuit of this self-interest the interests of all \u2013 or of society \u2013 also seem to be looked after. In this sense, there seems to be some sort of an unseen force at work that converts what is good for each individual into what is good for society. This unseen force was called \u2018the invisible hand\u2019 by Adam Smith. Thus, Smith argued that the capitalist economy is driven by individual self-interest, and works best when individual buyers and sellers make rational decisions that serve their own interests. Smith used the idea of the \u2018invisible hand\u2019 to argue that society overall benefits when individuals pursue their own self-interest in the market, because it stimulates the economy and creates more wealth. For this reason, Smith supported the idea of a \u2018free market\u2019, that is, a market free from all kinds of regulation whether by the state or otherwise.
This economic philosophy was also given the name laissez-faire, a French phrase that means 'leave alone' or 'let it be'.

Modern economics developed from the ideas of early thinkers such as Adam Smith, and is based on the idea that the economy can be studied as a separate part of society that operates according to its own laws, leaving out the larger social or political context in which markets operate. In contrast to this approach, sociologists have attempted to develop an alternative way of studying economic institutions and processes within the larger social framework.

Sociologists view markets as social institutions that are constructed in culturally specific ways. For example, markets are often controlled or organised by particular social groups or classes, and have specific connections to other institutions, social processes and structures. Sociologists often express this idea by saying that economies are socially 'embedded'. This is illustrated by two examples, one of a weekly tribal haat, and the other of a 'traditional business community' and its trading networks in colonial India.

In most agrarian or 'peasant' societies around the world, periodic markets are a central feature of social and economic organisation. Weekly markets bring together people from surrounding villages, who come to sell their agricultural or other produce and to buy manufactured goods and other items that are not available in their villages. They attract traders from outside the local area, as well as moneylenders, entertainers, astrologers, and a host of other specialists offering their services and wares. In rural India there are also specialised markets that take place at less frequent intervals, for instance cattle markets. These periodic markets link different regional and local economies together, and link them to the wider national economy and to towns and metropolitan centres.
The weekly haat is a common sight in rural and even urban India. In hilly and forested areas (especially those inhabited by adivasis), where settlements are far-flung, roads and communications are poor, and the economy is relatively undeveloped, the weekly market is the major institution both for the exchange of goods and for social intercourse. Local people come to the market to sell their agricultural or forest produce to traders, who carry it to the towns for resale, and they buy essentials such as salt and agricultural implements, and consumption items such as bangles and jewellery. But for many visitors, the primary reason to come to the market is social – to meet kin, to arrange marriages, to exchange gossip, and so on.

While the weekly market in tribal areas may be a very old institution, its character has changed over time. After these remote areas were brought under the control of the colonial state, they were gradually incorporated into the wider regional and national economies. Tribal areas were 'opened up' by building roads and 'pacifying' the local people (many of whom resisted colonial rule through their so-called 'tribal rebellions'), so that the rich forest and mineral resources of these areas could be exploited. This led to the influx of traders, moneylenders, and other non-tribal people from the plains into these areas. The local tribal economy was transformed as forest produce was sold to outsiders, and money and new kinds of goods entered the system. Tribals were also recruited as labourers to work on the plantations and mines that were established under colonialism; a 'market' for tribal labour thus developed during the colonial period. Due to all these changes, local tribal economies became linked into wider markets, usually with very negative consequences for local people. For example, the entry of traders and moneylenders from outside the local area led to the impoverishment of adivasis, many of whom lost their land to outsiders.

The weekly market as a social institution, the links between the local tribal economy and the outside, and the exploitative economic relationships between adivasis and others are illustrated by a study of a weekly market in Bastar district. This district is populated mainly by Gonds, an adivasi group. At the weekly market, you find local people, including tribals and non-tribals (mostly Hindus), as well as outsiders – mainly Hindu traders of various castes.
Forest officials also come to the market to conduct business with adivasis who work for the Forest Department, and the market attracts a variety of specialists selling their goods and services. The major goods exchanged in the market are manufactured goods (such as jewellery and trinkets, pots and knives), non-local foods (such as salt and haldi (turmeric)), local food, agricultural produce and manufactured items (such as bamboo baskets), and forest produce (such as tamarind and oil-seeds). The forest produce brought by the adivasis is purchased by traders who carry it to towns. In the market, the buyers are mostly adivasis while the sellers are mainly caste Hindus. Adivasis earn cash from the sale of forest and agricultural produce and from wage labour, which they spend in the market mainly on low-value trinkets and jewellery, and on consumption items such as manufactured cloth.

According to Alfred Gell (1982), the anthropologist who studied Dhorai, the market has significance well beyond its economic functions. For example, the layout of the market symbolises the hierarchical inter-group social relations in this region. Different social groups are located according to their position in the caste and social hierarchy as well as in the market system. The wealthy and high-ranking Rajput jeweller and the middle-ranking local Hindu traders sit in the central 'zones', and the tribal sellers of vegetables and local wares in the outer circles. The quality of social relations is expressed in the kinds of goods that are bought and sold, and in the way transactions are carried out. For instance, interactions between tribals and non-tribal traders are very different from those between Hindus of the same community: they express hierarchy and social distance rather than social equality.

In some traditional accounts of Indian economic history, India's economy and society are seen as unchanging, with economic transformation thought to have begun only with the advent of colonialism. It was assumed that India consisted of ancient village communities that were relatively self-sufficient, and that their economies were organised primarily on the basis of non-market exchange. Under colonialism and in the early post-independence period, the penetration of the commercial money economy into local agrarian economies, and their incorporation into wider networks of exchange, was thought to have brought about radical social and economic changes in rural and urban society. While colonialism certainly brought about major economic transformations – for example, due to the demand that land revenue be paid in cash – recent historical research has shown that much of India's economy was already extensively monetised (trade was carried out using money) in the late pre-colonial period. And while various kinds of non-market exchange systems (such as the 'jajmani system') did exist in many villages and regions, even during the pre-colonial period villages were incorporated into wider networks of exchange through which agricultural products and other goods circulated (Bayly 1983, Stein and Subrahmanyam 1996). It now seems that the sharp line often drawn between the 'traditional' and the 'modern' (or the pre-capitalist and capitalist) economy is actually rather fuzzy.
Recent historical research has also highlighted the extensive and sophisticated trading networks that existed in pre-colonial India. We know that for centuries India was a major manufacturer and exporter of handloom cloth (both ordinary cotton and luxury silks), as well as the source of many other goods (such as spices) that were in great demand in the global market, especially in Europe. So it is not surprising that pre-colonial India had well-organised manufacturing centres as well as indigenous merchant groups, trading networks, and banking systems that enabled trade to take place within India, and between India and the rest of the world. These traditional trading communities or castes had their own systems of banking and credit. An important instrument of exchange and credit was the hundi, or bill of exchange (like a credit note), which allowed merchants to engage in long-distance trade. Because trade took place primarily within the caste and kinship networks of these communities, a merchant in one part of the country could issue a hundi that would be honoured by a merchant in another place.

The Nattukottai Chettiars (or Nakarattars) of Tamil Nadu provide an interesting illustration of how these indigenous trading networks were organised and worked. A study of this community during the colonial period shows how its banking and trade activities were deeply embedded in the social organisation of the community. The structures of caste, kinship, and family were oriented towards commercial activity, and business activity was carried out within these social structures. As in most 'traditional' merchant communities, Nakarattar banks were basically joint family firms, so that the structure of the business firm was the same as that of the family. Similarly, trading and banking activities were organised through caste and kinship relationships. For instance, their extensive caste-based social networks allowed Chettiar merchants to expand their activities into Southeast Asia and Ceylon. In one view, the economic activities of the Nakarattars represented a kind of indigenous capitalism. This interpretation raises the question of whether there are, or were, forms of 'capitalism' apart from those that arose in Europe (Rudner 1994).

Many sociological studies of the Indian economy have focused on 'traditional merchant communities' or castes such as the Nakarattars. As you have already learned, there is a close connection between the caste system and the economy, in terms of landholding, occupational differentiation, and so on. This is also true in the case of trade and markets. In fact, 'Vaisyas' constitute one of the four varnas – an indication of the importance of the merchant and of trade or business in Indian society since ancient times. However, like the other varnas, 'Vaisya' is often a status that is claimed or aspired to rather than a fixed identity or social status. Although there are 'Vaisya' communities (such as banias in North India) whose traditional occupation has been trade or commerce for a long time, there are also other caste groups that have entered into trade. Such groups tend to acquire or claim 'Vaisya' status in the process of upward mobility. As with the history of all caste communities, in most cases there is a complex relationship between caste status or identity and caste practices, including occupation.
The 'traditional business communities' in India include not only 'Vaisyas' but also other groups with distinctive religious or other community identities, such as the Parsis, Sindhis, Bohras, or Jains. Merchant communities did not always have a high status in society; for instance, during the colonial period the long-distance trade in salt was controlled by a marginalised 'tribal' group, the Banjaras. In each case, the particular nature of community institutions and ethos gives rise to a different organisation and practice of business.

To understand the operation of markets in India, both in earlier periods and at present, we can examine how specific arenas of business are controlled by particular communities. One of the reasons for this caste-based specialisation is that trade and commerce often operate through caste and kinship networks, as we have seen in the case of the Nakarattars. Because businessmen are more likely to trust others of their own community or kin group, they tend to do business within such networks rather than with outsiders – and this tends to create a caste monopoly within certain areas of business.

The advent of colonialism in India produced major upheavals in the economy, causing disruptions in production, trade, and agriculture.
A well-known example is the demise of the handloom industry due to the flooding of the market with cheap manufactured textiles from England. Although pre-colonial India already had a complex monetised economy, most historians consider the colonial period to be the turning point, for it was in this era that India began to be more fully linked to the world capitalist economy. Before being colonised by the British, India was a major supplier of manufactured goods to the world market. After colonisation, she became a source of raw materials and agricultural products and a consumer of manufactured goods, both largely for the benefit of industrialising England. At the same time, new groups (especially the Europeans) entered into trade and business, sometimes in alliance with existing merchant communities and in some cases by forcing them out. But rather than completely overturning existing economic institutions, the expansion of the market economy in India provided new opportunities to some merchant communities, which were able to improve their position by re-orienting themselves to changing economic circumstances. In some cases, new communities emerged to take advantage of the economic opportunities provided by colonialism, and continued to hold economic power even after Independence.

A good example of this process is provided by the Marwaris, probably the most widespread and best-known business community in India. Represented by leading industrial families such as the Birlas, the community also includes shopkeepers and small traders in the bazaars of towns throughout the country. The Marwaris became a successful business community only during the colonial period, when they took advantage of new opportunities in colonial cities such as Calcutta and settled throughout the country to carry out trade and moneylending. Like the Nakarattars, the success of the Marwaris rested on their extensive social networks, which created the relations of trust necessary to operate their banking system. Many Marwari families accumulated enough wealth to become moneylenders, and by acting as bankers also helped the commercial expansion of the British in India (Hardgrove 2004). In the late colonial period and after Independence, some Marwari families transformed themselves into modern industrialists, and even today Marwaris control more of India's industry than any other community. This story of the emergence of a new business community under colonialism, and of its transformation from small migrant traders to merchant bankers to industrialists, illustrates the importance of the social context to economic processes.

The growth of capitalism around the world has meant the extension of markets into places and spheres of life that were previously untouched by this system. Commodification occurs when things that were earlier not traded in the market become commodities – for instance, when labour or skills become things that can be bought and sold. According to Marx and other critics of capitalism, the process of commodification has negative social effects. The commodification of labour is one example, but there are many others in contemporary society. For instance, there is a controversy about the sale of kidneys by the poor to cater to rich patients who need kidney transplants. According to many people, human organs should not become commodities.
In earlier times, human beings themselves were bought and sold as slaves, but today it is considered immoral to treat people as commodities. Yet in modern society almost everyone accepts the idea that a person's labour can be bought, or that other services or skills can be provided in exchange for money. According to Marx, this situation is found only in capitalist societies.

In contemporary India, we can observe that things or processes that were earlier not part of market exchange become commodified. For example, traditionally, marriages were arranged by families, but now there are professional marriage bureaus and websites that help people to find brides and grooms for a fee. Another example is the many private institutes that offer courses in 'personality development', spoken English, and so on, teaching students (mostly middle-class youth) the cultural and social skills required to succeed in the contemporary world. In earlier times, social skills such as good manners and etiquette were imparted mainly through the family. Or we could think of the burgeoning of privately owned schools and colleges and coaching classes as a process of commodification of education.

Another important feature of capitalist society is that consumption becomes more and more important, not just for economic reasons but because it has symbolic meaning. In modern societies, consumption is an important way in which social distinctions are created and communicated. The consumer conveys a message about his or her socio-economic status or cultural preferences by buying and displaying certain goods, and companies try to sell their goods by appealing to symbols of status or culture. Think of the advertisements that we see every day on television and roadside hoardings, and the meanings that advertisers try to attach to consumer goods in order to sell them.

One of sociology's founders, Max Weber, was among the first to point out that the goods people buy and use are closely related to their status in society; he coined the term status symbol to describe this relationship. For example, among the middle class in India today, the brand of cell phone or the model of car that one owns are important markers of socio-economic status. Weber also wrote about how classes and status groups are differentiated on the basis of their lifestyles. Consumption is one aspect of lifestyle, but it also includes the way you decorate your home, the way you dress, your leisure activities, and many other aspects of daily life. Sociologists study consumption patterns and lifestyles because of their cultural and social significance in modern life.

Since the late 1980s, India has entered a new era in its economic history, following the change in economic policy from state-led development to liberalisation. This shift also ushered in the era of globalisation, a period in which the world is becoming increasingly interconnected – not only economically but also culturally and politically. The term globalisation covers a number of trends, especially the increase in the international movement of commodities, money, information, and people, as well as the development of technology (such as in computers, telecommunications, and transport) and other infrastructure to allow this movement.

A central feature of globalisation is the increasing extension and integration of markets around the world.
This integration means that changes in a market in one part of the globe may have a profound impact somewhere far away. For instance, India's booming software industry may face a slump if the U.S. economy does badly (as happened after the 9/11 attacks on the World Trade Centre in New York), leading to loss of business and jobs here. The software services and business process outsourcing (BPO) industries (such as call centres) are some of the major avenues through which India is getting connected to the global economy. Companies based in India provide low-cost services and labour to customers located in the developed countries of the West. We can say that there is now a global market for Indian software labour and other services.
Under globalisation, not only money and goods but also people, cultural products, and images circulate rapidly around the world, enter new circuits of exchange, and create new markets. Products, services, or elements of culture that were earlier outside of the market system are drawn into it. An example is the marketing of Indian spirituality and knowledge systems (such as yoga and ayurveda) in the West. The growing market for international tourism also suggests how culture itself may become a commodity. An example is the famous annual fair in Pushkar, Rajasthan, to which pastoralists and traders come from distant places to buy and sell camels and other livestock. While the Pushkar fair continues to be a major social and economic event for local people, it is also marketed internationally as a major tourist attraction. The fair is all the more attractive to tourists because it comes just before Kartik Purnima, a major Hindu religious festival, when pilgrims come to bathe in the holy Pushkar Lake. Thus, Hindu pilgrims, camel traders, and foreign tourists mingle at this event, exchanging not only livestock and money but also cultural symbols and religious merit.

The globalisation of the Indian economy has been due primarily to the policy of liberalisation that began in the late 1980s. Liberalisation includes a range of policies such as the privatisation of public sector enterprises (selling government-owned companies to private companies); the loosening of government regulations on capital, labour, and trade; a reduction in tariffs and import duties so that foreign goods can be imported more easily; and allowing foreign companies easier access to set up industries in India. Another word for such changes is marketisation, the use of markets or market-based processes (rather than government regulations or policies) to solve social, political, or economic problems. These include the relaxation or removal of economic controls (deregulation), the privatisation of industries, and the removal of government controls over wages and prices. Those who advocate marketisation believe that these steps will promote economic growth and prosperity because private industry is more efficient than government-owned industry.

The changes made under the liberalisation programme have stimulated economic growth and opened up Indian markets to foreign companies. For example, many foreign branded goods are now sold that were not previously available. Increasing foreign investment is supposed to help economic growth and employment, and the privatisation of public companies is supposed to increase their efficiency and reduce the government's burden of running them. However, the impact of liberalisation has been mixed. Many people argue that liberalisation and globalisation have had, or will have, a negative net impact on India – that is, that the costs and disadvantages will outweigh the advantages and benefits.
Some sectors of Indian industry (like software and information technology) or agriculture (like fish or fruit) may benefit from access to a global market, but other sectors (like automobiles, electronics or oilseeds) will lose because they cannot compete with foreign producers.

For example, Indian farmers are now exposed to competition from farmers in other countries because the import of agricultural products is allowed. Earlier, Indian agriculture was protected from the world market by support prices and subsidies. Support prices help to ensure a minimum income for farmers, because they are the prices at which the government agrees to buy agricultural commodities. Subsidies lower the cost of farming because the government pays part of the price charged for inputs (such as fertilisers or diesel oil). Liberalisation opposes this kind of government interference in markets, so support prices and subsidies are reduced or withdrawn. This means that many farmers are not able to make a decent living from agriculture. Similarly, small manufacturers have been exposed to global competition as foreign goods and brands have entered the market, and some have not been able to compete. The privatisation or closure of public sector industries has led to loss of employment in some sectors, and to the growth of unorganised sector employment at the expense of the organised sector. This is not good for workers, because the organised sector generally offers better-paid and more regular or permanent jobs.

In this chapter we have seen that there are many different kinds of markets in contemporary India, from the village haat to the virtual stock exchange. These markets are themselves social institutions, and are connected to other aspects of the social structure, such as caste and class, in various ways. In addition, we have learned that exchange has a social and symbolic significance that goes far beyond its immediate economic purpose. Moreover, the ways in which goods and services are exchanged or circulate are changing rapidly due to the liberalisation of the Indian economy and globalisation. There are many different ways and levels at which goods, services, cultural symbols, money, and so on circulate – from the local market in a village or town right up to a global trading network such as the Nasdaq. In today's rapidly changing world, it is important to understand how markets are being constantly transformed, and the broader social and economic consequences of these changes.

In every society, some people have a greater share of valued resources – money, property, education, health, and power – than others. These social resources can be divided into three forms of capital: economic capital in the form of material assets and income; cultural capital, such as educational qualifications and status; and social capital, in the form of networks of contacts and social associations (Bourdieu 1986). Often, these three forms of capital overlap, and one can be converted into the other. For example, a person from a well-off family (economic capital) can afford expensive higher education, and so can acquire cultural or educational capital.
Someone with influential relatives and friends (social capital) may – through access to good advice, recommendations or information – manage to get a well-paid job.

Patterns of unequal access to social resources are commonly called social inequality. Some social inequality reflects innate differences between individuals – for example, their varying abilities and efforts. Someone may be endowed with exceptional intelligence or talent, or may have worked very hard to achieve their wealth and status. However, by and large, social inequality is not the outcome of innate or 'natural' differences between people, but is produced by the society in which they live. Sociologists use the term social stratification to refer to a system by which categories of people in a society are ranked in a hierarchy. This hierarchy then shapes people's identity and experiences, their relations with others, as well as their access to resources and opportunities. Three key principles help explain social stratification:
1. Social stratification is a characteristic of society, not simply a function of individual differences. It is a society-wide system that unequally distributes social resources among categories of people. In the most technologically primitive societies – hunting and gathering societies, for instance – little was produced, so only rudimentary social stratification could exist. In more technologically advanced societies, however, where people produce a surplus over and above their basic needs, social resources are unequally distributed to various social categories regardless of people's innate individual abilities.
2. Social stratification persists over generations. It is closely linked to the family and to the inheritance of social resources from one generation to the next. A person's social position is ascribed; that is, children assume the social positions of their parents. Within the caste system, birth dictates occupational opportunities. A Dalit is likely to be confined to traditional occupations such as agricultural labour, scavenging, or leather work, with little chance of being able to get high-paying white-collar or professional work. The ascribed aspect of social inequality is reinforced by the practice of endogamy; that is, marriage is usually restricted to members of the same caste, ruling out the potential for blurring caste lines through inter-marriage.
3. Social stratification is supported by patterns of belief, or ideology. No system of social stratification is likely to persist over generations unless it is widely viewed as being either fair or inevitable. The caste system, for example, is justified in terms of the opposition of purity and pollution, with the Brahmins designated as the most superior and Dalits as the most inferior by virtue of their birth and occupation. Not everyone, though, thinks of a system of inequality as legitimate. Typically, people with the greatest social privileges express the strongest support for systems of stratification such as caste and race, while those who have experienced the exploitation and humiliation of being at the bottom of the hierarchy are most likely to challenge them.

Often we discuss social exclusion and discrimination as though they pertain to differential economic resources alone. This, however, is only partially true. People often face discrimination and exclusion because of their gender, religion, ethnicity, language, caste or disability.
Thus, women from a privileged background may face sexual harassment in public places. A middle-class professional from a minority religious or ethnic group may find it difficult to get accommodation in a middle-class colony, even in a metropolitan city. People often harbour prejudices about other social groups. Each of us grows up as a member of a community from which we acquire ideas not just about our 'community', our 'caste' or 'class' and our 'gender', but also about others. Often these ideas reflect prejudices.

Prejudices refer to pre-conceived opinions or attitudes held by members of one group towards another. The word literally means 'pre-judgement': an opinion formed in advance of any familiarity with the subject, before considering any available evidence. A prejudiced person's preconceived views are often based on hearsay rather than on direct evidence, and are resistant to change even in the face of new information. Prejudice may be either positive or negative. Although the word is generally used for negative pre-judgements, it can also apply to favourable pre-judgements. For example, a person may be prejudiced in favour of members of his or her own caste or group and – without any evidence – believe them to be superior to members of other castes or groups.

Prejudices are often grounded in stereotypes: fixed and inflexible characterisations of a group of people. Stereotypes are often applied to ethnic and racial groups and to women. In a country such as India, which was colonised for a long time, many of these stereotypes are partly colonial creations. Some communities were characterised as 'martial races', some others as effeminate or cowardly, yet others as untrustworthy. In both English and Indian fictional writing we often encounter an entire group of people classified as 'lazy' or 'cunning'. It may indeed be true that some individuals are sometimes lazy or cunning, brave or cowardly, but such a general statement is true of individuals in every group. Even for such individuals, it is not true all the time – the same individual may be both lazy and hardworking at different times. Stereotypes fix whole groups into single, homogeneous categories; they refuse to recognise the variation across individuals, contexts, or time. They treat an entire community as though it were a single person with a single all-encompassing trait or characteristic.

If prejudice describes attitudes and opinions, discrimination refers to actual behaviour towards another group or individual. Discrimination can be seen in practices that disqualify members of one group from opportunities open to others, as when a person is refused a job because of their gender or religion. Discrimination can be very hard to prove because it may not be open or explicitly stated. Discriminatory behaviour or practices may be presented as motivated by other, more justifiable reasons rather than prejudice. For example, a person refused a job because of his or her caste may be told that he or she was less qualified than others, and that the selection was done purely on merit.
These social resources can be divided into three forms of capital \u2013 economic capital in the form of material assets and income; cultural capital such as educational qualifications and status; and social capital in the form of networks of contacts and social associations (Bourdieu 1986). Often, these three forms of capital overlap and one can be converted into the other. For example, a person from a well-off family (economic capital) can afford expensive higher education, and so can acquire cultural or educational capital. Someone with influential relatives and friends (social capital) may \u2013 through access to good advice, recommendations or information \u2013 manage to get a well-paid job.\n\nPatterns of unequal access to social resources are commonly called social inequality. Some social inequality reflects innate differences between individuals for example, their varying abilities and efforts. Someone may be endowed with exceptional intelligence or talent, or may have worked very hard to achieve their wealth and status. However, by and large, social inequality is not the outcome of innate or \u2018natural\u2019 differences between people, but is produced by the society in which they live. Sociologists use the term social stratification to refer to a system by which categories of people in a society are ranked in a hierarchy. This hierarchy then shapes people\u2019s identity and experiences, their relations with others, as well as their access to resources and opportunities. Three key principles help explain social stratification:\n1. Social stratification is a characteristic of society, not simply a function of individual differences. Social stratification is a society-wide system that unequally distributes social resources among categories of people. In the most technologically primitive societies \u2013 hunting and gathering societies, for instance \u2013 little was produced so only rudimentary social stratification could exist. In more technologically advanced societies where people produce a surplus over and above their basic needs, however, social resources are unequally distributed to various social categories regardless of people\u2019s innate individual abilities.\n2. Social stratification persists over generations. It is closely linked to the family and to the inheritance of social resources from one generation to the next. A person\u2019s social position is ascribed. That is, children assume the social positions of their parents. Within the caste system, birth dictates occupational opportunities. A Dalit is likely to be confined to traditional occupations such as agricultural labour, scavenging, or leather work, with little chance of being able to get high-paying white-collar or professional work. The ascribed aspect of social inequality is reinforced by the practice of endogamy. That is, marriage is usually restricted to members of the same caste, ruling out the potential for blurring caste lines through inter\u0002marriage.\n3. Social stratification is supported by patterns of belief, or ideology. No system of social stratification is likely to persist over generations unless it is widely viewed as being either fair or inevitable. The caste system, for example, is justified in terms of the opposition of purity and pollution, with the Brahmins designated as the most superior and Dalits as the most inferior by virtue of their birth and occupation. Not everyone, though, thinks of a system of inequality as legitimate. 
Typically, people with the greatest social privileges express the strongest support for systems of stratification such as caste and race. Those who have experienced the exploitation and humiliation of being at the bottom of the hierarchy are most likely to challenge it.\n\nOften we discuss social exclusion and discrimination as though they pertain to differential economic resources alone. This however is only partially true. People often face discrimination and exclusion because of their gender, religion, ethnicity, language, caste and disability. Thus, women from a privileged background may face sexual harassment in public places. A middle class professional from a minority religious or ethnic group may find it difficult to get accommodation in a middle class colony even in a metropolitan city. People often harbour prejudices about other social groups. Each of us grows up as a member of a community from which we acquire ideas not just about our \u2018community\u2019, our \u2018caste\u2019 or \u2018class\u2019 our \u2018gender\u2019 but also about others. Often these ideas reflect prejudices. \n\nPrejudices refer to pre-conceived opinions or attitudes held by members of one group towards another. The word literally means \u2018pre-judgement\u2019, that is, an opinion formed in advance of any familiarity with the subject, before considering any available evidence. A prejudiced person\u2019s preconceived views are often based on hearsay rather than on direct evidence, and are resistant to change even in the face of new information. Prejudice may be either positive or negative. Although the word is generally used for negative pre-judgements, it can also apply to favourable pre-judgement. For example, a person may be prejudiced in favour of members of his/her own caste or group and \u2013 without any evidence \u2013 believe them to be superior to members of other castes or groups.\n\nPrejudices are often grounded in stereotypes, fixed and inflexible characterisations of a group of people. Stereotypes are often applied to ethnic and racial groups and to women. In a country such as India, which was colonised for a long time, many of these stereotypes are partly colonial creations. Some communities were characterised as \u2018martial races\u2019, some others as effeminate or cowardly, yet others as untrustworthy. In both English and Indian fictional writings we often encounter an entire group of people classified as \u2018lazy\u2019 or \u2018cunning\u2019. It may indeed be true that some individuals are sometimes lazy or cunning, brave or cowardly. But such a general statement is true of individuals in every group. Even for such individuals, it is not true all the time \u2013 the same individual may be both lazy and hardworking at different times. Stereotypes fix whole groups into single, homogenous categories; they refuse to recognise the variation across individuals and across contexts or across time. They treat an entire community as though it were a single person with a single all-encompassing trait or characteristic.\n\nIf prejudice describes attitudes and opinions, discrimination refers to actual behaviour towards another group or individual. Discrimination can be seen in practices that disqualify members of one group from opportunities open to others, as when a person is refused a job because of their gender or religion. Discrimination can be very hard to prove because it may not be open or explicitly stated. 
Discriminatory behaviour or practices may be presented as motivated by other, more justifiable, reasons rather than prejudice. For example, the person who is refused a job because of his or her caste may be told that he or she was less qualified than others, and that the selection was done purely on merit.
Social exclusion refers to ways in which individuals may become cut off from full involvement in the wider society. It focuses attention on a broad range of factors that prevent individuals or groups from having opportunities open to the majority of the population. In order to live a full and active life, individuals must not only be able to feed, clothe and house themselves, but should also have access to essential goods and services such as education, health, transportation, insurance, social security, banking and even access to the police or judiciary. Social exclusion is not accidental but systematic – it is the result of structural features of society.

It is important to note that social exclusion is involuntary – that is, exclusion is practised regardless of the wishes of those who are excluded. For example, rich people are never found sleeping on the pavements or under bridges like thousands of homeless poor people in cities and towns. This does not mean that the rich are being 'excluded' from access to pavements and park benches, because they could certainly gain access if they wanted to, but they choose not to. Social exclusion is sometimes wrongly justified by the same logic – it is said that the excluded group itself does not wish to participate. The truth of such an argument is not obvious when exclusion is preventing access to something desirable (as distinct from something clearly undesirable, like sleeping on the pavement).

Prolonged experience of discriminatory or insulting behaviour often produces a reaction on the part of the excluded, who then stop trying for inclusion. For example, 'upper' caste Hindu communities have often denied entry into temples for the 'lower' castes and especially the Dalits. After decades of such treatment, the Dalits may build their own temple, or convert to another religion like Buddhism, Christianity or Islam. After they do this, they may no longer desire to be included in the Hindu temple or religious events. But this does not mean that social exclusion is not being practised. The point is that the exclusion occurs regardless of the wishes of the excluded.

India, like most societies, has been marked by acute practices of social discrimination and exclusion. At different periods of history protest movements arose against caste, gender and religious discrimination. Yet prejudices remain and often, new ones emerge.
Thus, legislation alone is unable to transform society or produce lasting social change. A constant social campaign to raise awareness and sensitivity is required to break down these prejudices.

You have already read about the impact of colonialism on Indian society. What discrimination and exclusion mean was brought home to even the most privileged Indians at the hands of the British colonial state. Such experiences were, of course, common to the various socially discriminated groups such as women, Dalits and other oppressed castes and tribes. Faced with the humiliation of colonial rule and simultaneously exposed to ideas of democracy and justice, many Indians initiated and participated in a large number of social reform movements.

In this chapter we focus on four such groups who have suffered from serious social inequality and exclusion, namely Dalits or the ex-untouchable castes; adivasis or communities referred to as 'tribal'; women; and the differently abled. We attempt to look at each of their stories of struggles and achievements in the following sections.

Apart from these four groups, two more groups are included in this category: transgender persons and people of the third gender. Information about these groups is given in Box 5.1a.

'Untouchability' is an extreme and particularly vicious aspect of the caste system that prescribes stringent social sanctions against members of castes located at the bottom of the purity-pollution scale. Strictly speaking, the 'untouchable' castes are outside the caste hierarchy – they are considered to be so 'impure' that their mere touch severely pollutes members of all other castes, bringing terrible punishment for the former and forcing the latter to perform elaborate purification rituals. In fact, notions of 'distance pollution' existed in many regions of India (particularly in the south), such that even the mere presence or the shadow of an 'untouchable' person was considered polluting. Despite the limited literal meaning of the word, the institution of 'untouchability' refers not just to the avoidance or prohibition of physical contact but to a much broader set of social sanctions.

It is important to emphasise that the three main dimensions of untouchability – namely, exclusion, humiliation-subordination and exploitation – are all equally important in defining the phenomenon. Although other (i.e., 'touchable') low castes are also subjected to subordination and exploitation to some degree, they do not suffer the extreme forms of exclusion reserved for 'untouchables'. Dalits experience forms of exclusion that are unique and not practised against other groups – for instance, being prohibited from sharing drinking water sources or participating in collective religious worship, social ceremonies and festivals. At the same time, untouchability may also involve forced inclusion in a subordinated role, such as being compelled to play the drums at a religious event. The performance of publicly visible acts of (self-)humiliation and subordination is an important part of the practice of untouchability.
Common instances include the imposition of gestures of deference (such as taking off headgear, carrying footwear in the hand, standing with bowed head, not wearing clean or 'bright' clothes, and so on) as well as routinised abuse and humiliation. Moreover, untouchability is almost always associated with economic exploitation of various kinds, most commonly through the imposition of forced, unpaid (or under-paid) labour, or the confiscation of property. Finally, untouchability is a pan-Indian phenomenon, although its specific forms and intensity vary considerably across regions and socio-historical contexts.

The so-called 'untouchables' have been referred to collectively by many names over the centuries. Whatever the specific etymology of these names, they are all derogatory and carry a strongly pejorative charge. In fact, many of them continue to be used as forms of abuse even today, although their use is now a criminal offence. Mahatma Gandhi had popularised the term 'Harijan' (literally, children of God) in the 1930s to counter the pejorative charge carried by caste names.

However, the ex-untouchable communities and their leaders have coined another term, 'Dalit', which is now the generally accepted term for referring to these groups. In Indian languages, the term Dalit literally means 'downtrodden' and conveys the sense of an oppressed people. Though it was neither coined by Dr. Ambedkar nor frequently used by him, the term certainly resonates with his philosophy and the movement for empowerment that he led. It received wide currency during the caste riots in Mumbai in the early 1970s. The Dalit Panthers, a radical group that emerged in western India during that time, used the term to assert their identity as part of their struggle for rights and dignity.

Each and every organism can live only for a certain period of time. The period from birth to the natural death of an organism represents its life span. Life spans of a few organisms are given in Figure 1.1. Several other organisms are drawn, for which you should find out their life spans and write them in the spaces provided. Examine the life spans of the organisms represented in Figure 1.1. Isn't it both interesting and intriguing to note that a life span may be as short as a few days or as long as a few thousand years? Between these two extremes are the life spans of most other living organisms. You may note that the life spans of organisms are not necessarily correlated with their sizes; the sizes of crows and parrots are not very different, yet their life spans show a wide difference. Similarly, a mango tree has a much shorter life span as compared to a peepal tree. Whatever the life span, the death of every individual organism is a certainty, i.e., no individual is immortal, except single-celled organisms. Why do we say there is no natural death in single-celled organisms? Given this reality, have you ever wondered how vast numbers of plant and animal species have existed on earth for several thousands of years? There must be some processes in living organisms that ensure this continuity. Yes, we are talking about reproduction, something that we take for granted.

Reproduction is defined as a biological process in which an organism gives rise to young ones (offspring) similar to itself. The offspring grow, mature and in turn produce new offspring.
Thus, there is a cycle of birth, growth and death. Reproduction enables the continuity of the species, generation after generation. You will study later in Chapter 5 (Principles of Inheritance and Variation) how genetic variation is created and inherited during reproduction.

There is a large diversity in the biological world, and each organism has evolved its own mechanism to multiply and produce offspring. The organism's habitat, its internal physiology and several other factors are collectively responsible for how it reproduces. Based on whether one organism or two participate in the process, reproduction is of two types. When offspring are produced by a single parent, with or without the involvement of gamete formation, the reproduction is asexual. When two parents (of opposite sex) participate in the reproductive process, which also involves the fusion of male and female gametes, it is called sexual reproduction.

In asexual reproduction, a single individual (parent) is capable of producing offspring. As a result, the offspring that are produced are not only identical to one another but are also exact copies of their parent. Are these offspring likely to be genetically identical or different? The term clone is used to describe such morphologically and genetically similar individuals.

Let us see how widespread asexual reproduction is among different groups of organisms. Asexual reproduction is common among single-celled organisms, and in plants and animals with relatively simple organisations. In Protists and Monerans, the organism or the parent cell divides by mitosis into two to give rise to new individuals (Figure 1.2). Thus, in these organisms cell division is itself a mode of reproduction.

Many single-celled organisms reproduce by binary fission, where a cell divides into two halves and each rapidly grows into an adult (e.g., Amoeba, Paramecium). In yeast, the division is unequal and small buds are produced that remain attached initially to the parent cell; these eventually get separated and mature into new yeast organisms (cells). Under unfavourable conditions, the Amoeba withdraws its pseudopodia and secretes a three-layered hard covering or cyst around itself. This phenomenon is termed encystation. When favourable conditions return, the encysted Amoeba divides by multiple fission and produces many minute amoebae or pseudopodiospores; the cyst wall bursts, and the spores are liberated into the surrounding medium to grow into many amoebae. This phenomenon is known as sporulation.
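Since each binary fission simply copies the parent, the number of individuals doubles with every division, and every descendant carries the parent's genetic material unchanged. The short Python sketch below is an illustrative model of this idea, not textbook material: it represents each individual by a (hypothetical) genome string and shows that after n divisions a single parent yields 2^n genetically identical clones.

```python
# Illustrative model of binary fission: each division copies the parent
# genome unchanged, so the population doubles and stays clonal.

def binary_fission(population):
    """One round of fission: every cell splits into two identical cells."""
    return [genome for genome in population for _ in range(2)]

population = ["ATGCCGTA"]  # a single parent cell (hypothetical genome)
for generation in range(4):
    population = binary_fission(population)

print(len(population))            # 16 cells after 4 divisions (2**4)
print(len(set(population)) == 1)  # True: all offspring are clones
```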
Members of the Kingdom Fungi and simple plants such as algae reproduce through special asexual reproductive structures (Figure 1.3). The most common of these structures are zoospores, which usually are microscopic motile structures. Other common asexual reproductive structures are conidia (Penicillium), buds (Hydra) and gemmules (sponge). You have learnt about vegetative reproduction in plants in Class XI. What do you think – is vegetative reproduction also a type of asexual reproduction? Why do you say so? Is the term clone applicable to the offspring formed by vegetative reproduction? While in animals and other simple organisms the term asexual is used unambiguously, in plants the term vegetative reproduction is frequently used. In plants, the units of vegetative propagation such as the runner, rhizome, sucker, tuber, offset and bulb are all capable of giving rise to new offspring (Figure 1.4). These structures are called vegetative propagules.

Obviously, since the formation of these structures does not involve two parents, the process involved is asexual. In some organisms, if the body breaks into distinct pieces (fragments), each fragment grows into an adult capable of producing offspring (e.g., Hydra). This is also a mode of asexual reproduction, called fragmentation. You must have heard about the scourge of water bodies, the 'terror of Bengal'. This is nothing but the aquatic plant water hyacinth, one of the most invasive weeds, found growing wherever there is standing water. It drains oxygen from the water, which leads to the death of fishes. You will learn more about it in Chapters 13 and 14.
You may find it interesting to know that this plant was introduced in India because of its beautiful flowers and the shape of its leaves. Since it can propagate vegetatively at a phenomenal rate and spread all over a water body in a short period of time, it is very difficult to get rid of.

Are you aware how plants like potato, sugarcane, banana, ginger and dahlia are cultivated? Have you seen small plants emerging from the buds (called eyes) of the potato tuber, or from the rhizomes of banana and ginger? When you carefully try to determine the site of origin of the new plantlets in the plants listed above, you will notice that they invariably arise from the nodes present in the modified stems of these plants. When the nodes come in contact with damp soil or water, they produce roots and new plants. Similarly, adventitious buds arise from the notches present at the margins of the leaves of Bryophyllum. This ability is fully exploited by gardeners and farmers for the commercial propagation of such plants. It is interesting to note that asexual reproduction is the common method of reproduction in organisms that have a relatively simple organisation, like algae and fungi, and that they shift to the sexual method of reproduction just before the onset of adverse conditions. Find out how sexual reproduction enables these organisms to survive during unfavourable conditions. Why is sexual reproduction favoured under such conditions? Asexual (vegetative) as well as sexual modes of reproduction are exhibited by higher plants. On the other hand, only the sexual mode of reproduction is present in most animals.
Sexual reproduction involves the formation of male and female gametes, either by the same individual or by different individuals of the opposite sex. These gametes fuse to form the zygote, which develops into the new organism. It is an elaborate, complex and slow process as compared to asexual reproduction. Because of the fusion of male and female gametes, sexual reproduction results in offspring that are not identical to the parents or amongst themselves.

A study of diverse organisms – plants, animals or fungi – shows that though they differ so greatly in external morphology, internal structure and physiology, when it comes to the sexual mode of reproduction they surprisingly share a similar pattern. Let us first discuss what features are common to these diverse organisms. All organisms have to reach a certain stage of growth and maturity in their life before they can reproduce sexually. That period of growth is called the juvenile phase. It is known as the vegetative phase in plants. This phase is of variable duration in different organisms. The end of the juvenile/vegetative phase, which marks the beginning of the reproductive phase, can be seen easily in the higher plants when they come to flower. How long does it take for marigold/rice/wheat/coconut/mango plants to come to flower? In some plants, where flowering occurs more than once, what would you call the inter-flowering period – juvenile or mature?

Observe a few trees in your area. Do they flower during the same month year after year? Why do you think the availability of fruits like mango, apple, jackfruit, etc., is seasonal? Are there some plants that flower throughout the year and some others that show seasonal flowering? Plants – the annual and biennial types – show clear-cut vegetative, reproductive and senescent phases, but in the perennial species it is very difficult to clearly define these phases. A few plants exhibit unusual flowering phenomena; some of them, such as bamboo species, flower only once in their lifetime, generally after 50–100 years, produce a large number of fruits and die. Another plant, Strobilanthes kunthiana (neelakuranji), flowers once in 12 years. As many of you would know, this plant flowered during September–October 2006. Its mass flowering transformed large tracts of hilly areas in Kerala, Karnataka and Tamil Nadu into blue stretches and attracted a large number of tourists.
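Such long, fixed flowering cycles make for a simple worked example. Assuming a strict 12-year period anchored at the recorded 2006 mass flowering (the period and reference year come from the text; the code itself is only an illustration), the expected flowering years can be generated as follows:

```python
# Illustrative arithmetic: expected mass-flowering years of neelakuranji,
# assuming a strict 12-year cycle anchored at the 2006 flowering.

def flowering_years(reference_year, period, count):
    """Return the next `count` expected flowering years after the reference."""
    return [reference_year + period * i for i in range(1, count + 1)]

print(flowering_years(2006, 12, 3))  # [2018, 2030, 2042]
```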
In animals, the juvenile phase is followed by morphological and physiological changes prior to active reproductive behaviour. The reproductive phase is also of variable duration in different organisms.

Among animals, for example birds, do they lay eggs all through the year? Or is it a seasonal phenomenon? What about other animals like frogs and lizards? You will notice that birds living in nature lay eggs only seasonally. However, birds in captivity (as in poultry farms) can be made to lay eggs throughout the year. In this case, laying eggs is not related to reproduction but is a commercial exploitation for human welfare. The females of placental mammals exhibit cyclical changes in the activities of the ovaries and accessory ducts, as well as in hormones, during the reproductive phase. In non-primate mammals like cows, sheep, rats, deer, dogs, tigers, etc., such cyclical changes during reproduction are called the oestrus cycle, whereas in primates (monkeys, apes and humans) it is called the menstrual cycle. Many mammals, especially those living in natural, wild conditions, exhibit such cycles only during favourable seasons in their reproductive phase and are therefore called seasonal breeders. Many other mammals are reproductively active throughout their reproductive phase and hence are called continuous breeders.

That we all grow old (if we live long enough) is something that we recognise. But what is meant by growing old? The end of the reproductive phase can be considered one of the parameters of senescence or old age. There are concomitant changes in the body (like slowing of metabolism, etc.) during this last phase of the life span. Old age ultimately leads to death.

In both plants and animals, hormones are responsible for the transitions between the three phases. Interaction between hormones and certain environmental factors regulates the reproductive processes and the associated behavioural expressions of organisms.

Events in sexual reproduction: After the attainment of maturity, all sexually reproducing organisms exhibit events and processes that have remarkable fundamental similarity, even though the structures associated with sexual reproduction are indeed very different. The events of sexual reproduction, though elaborate and complex, follow a regular sequence. Sexual reproduction is characterised by the fusion (or fertilisation) of the male and female gametes, the formation of the zygote and embryogenesis. For convenience these sequential events may be grouped into three distinct stages, namely the pre-fertilisation, fertilisation and post-fertilisation events.
After their formation, male and female gametes must be physically brought together to facilitate fusion (fertilisation). Have you ever wondered how the gametes meet? In a majority of organisms, the male gamete is motile and the female gamete is stationary. Exceptions are a few fungi and algae in which both types of gametes are motile (Figure 1.7a). There is a need for a medium through which the male gametes move. In several simple plants like algae, bryophytes and pteridophytes, water is the medium through which this gamete transfer takes place. A large number of the male gametes, however, fail to reach the female gametes. To compensate for this loss of male gametes during transport, the number of male gametes produced is several thousand times the number of female gametes produced.

In seed plants, pollen grains are the carriers of the male gametes and the ovule contains the egg. Pollen grains produced in the anthers therefore have to be transferred to the stigma before fertilisation can occur (Figure 1.7b). In bisexual, self-fertilising plants, e.g., peas, transfer of pollen grains to the stigma is relatively easy, as the anthers and stigma are located close to each other; pollen grains, soon after they are shed, come in contact with the stigma.
But in cross-pollinating plants (including dioecious plants), a specialised event called pollination facilitates the transfer of pollen grains to the stigma. Pollen grains germinate on the stigma, and the pollen tubes carrying the male gametes reach the ovule and discharge the male gametes near the egg. In dioecious animals, since male and female gametes are formed in different individuals, the organism must have evolved a special mechanism for gamete transfer. Successful transfer and coming together of gametes is essential for the most critical event in sexual reproduction, fertilisation.

The most vital event of sexual reproduction is perhaps the fusion of gametes. This process, called syngamy, results in the formation of a diploid zygote. The term fertilisation is also often used for this process, and the two terms are frequently used interchangeably. What would happen if syngamy does not occur? It has to be mentioned here that in some organisms like rotifers, honeybees and even some lizards and birds (turkey), the female gamete undergoes development to form a new organism without fertilisation. This phenomenon is called parthenogenesis. Where does syngamy occur? In most aquatic organisms, such as a majority of algae and fishes as well as amphibians, syngamy occurs in the external medium (water), i.e., outside the body of the organism. This type of gametic fusion is called external fertilisation. Organisms exhibiting external fertilisation show great synchrony between the sexes and release a large number of gametes into the surrounding medium (water) in order to enhance the chances of syngamy. This happens in the bony fishes and frogs, where a large number of offspring are produced. A major disadvantage is that the offspring are extremely vulnerable to predators, threatening their survival up to adulthood.

In many terrestrial organisms belonging to fungi, higher animals such as reptiles, birds and mammals, and in a majority of plants (bryophytes, pteridophytes, gymnosperms and angiosperms), syngamy occurs inside the body of the organism, hence the process is called internal fertilisation. In all these organisms, the egg is formed inside the female body, where it fuses with the male gamete. In organisms exhibiting internal fertilisation, the male gamete is motile and has to reach the egg in order to fuse with it. In these organisms, even though the number of sperms produced is very large, there is a significant reduction in the number of eggs produced. In seed plants, however, the non-motile male gametes are carried to the female gamete by pollen tubes.
Events in sexual reproduction after the formation of the zygote are called post-fertilisation events.

Formation of the diploid zygote is universal in all sexually reproducing organisms. In organisms with external fertilisation, the zygote is formed in the external medium (usually water), whereas in those exhibiting internal fertilisation, the zygote is formed inside the body of the organism.

Further development of the zygote depends on the type of life cycle the organism has and the environment it is exposed to. In organisms belonging to fungi and algae, the zygote develops a thick wall that is resistant to desiccation and damage. It undergoes a period of rest before germination. In organisms with a haplontic life cycle, the zygote divides by meiosis to form haploid spores that grow into haploid individuals.

The zygote is the vital link that ensures continuity of the species between organisms of one generation and the next. Every sexually reproducing organism, including human beings, begins life as a single cell – the zygote.

Embryogenesis refers to the process of development of the embryo from the zygote. During embryogenesis, the zygote undergoes cell division (mitosis) and cell differentiation. While cell divisions increase the number of cells in the developing embryo, cell differentiation helps groups of cells to undergo certain modifications to form the specialised tissues and organs of an organism. You have studied the processes of cell division and differentiation in the previous class.

Animals are categorised into oviparous and viviparous based on whether the development of the zygote takes place outside the body of the female parent or inside it, i.e., whether they lay fertilised/unfertilised eggs or give birth to young ones. In oviparous animals like reptiles and birds, the fertilised eggs, covered by a hard calcareous shell, are laid in a safe place in the environment; after a period of incubation the young ones hatch out. On the other hand, in viviparous animals (the majority of mammals, including human beings), the zygote develops into a young one inside the body of the female organism. After attaining a certain stage of growth, the young ones are delivered out of the body of the female organism. Because of proper embryonic care and protection, the chances of survival of the young ones are greater in viviparous organisms.

In flowering plants, the zygote is formed inside the ovule. After fertilisation the sepals, petals and stamens of the flower wither and fall off. Can you name a plant in which the sepals remain attached? The pistil, however, remains attached to the plant. The zygote develops into the embryo and the ovules develop into seeds. The ovary develops into the fruit, which develops a thick wall called the pericarp that is protective in function (Figure 1.8). After dispersal, seeds germinate under favourable conditions to produce new plants.

Once they are shed, pollen grains have to land on the stigma before they lose viability if they are to bring about fertilisation. How long do you think pollen grains retain viability? The period for which pollen grains remain viable is highly variable and to some extent depends on the prevailing temperature and humidity.
In some cereals such as rice and wheat, pollen grains lose viability within 30 minutes of their release, while in some members of the Rosaceae, Leguminosae and Solanaceae they maintain viability for months. You may have heard of storing the semen/sperms of many animals, including humans, for artificial insemination. It is possible to store the pollen grains of a large number of species for years in liquid nitrogen (–196°C). Such stored pollen can be used, as pollen banks similar to seed banks, in crop breeding programmes.

The gynoecium represents the female reproductive part of the flower. The gynoecium may consist of a single pistil (monocarpellary) or may have more than one pistil (multicarpellary). When there is more than one, the pistils may be fused together (syncarpous) (Figure 2.7b) or may be free (apocarpous) (Figure 2.7c). Each pistil has three parts (Figure 2.7a): the stigma, style and ovary. The stigma serves as a landing platform for pollen grains. The style is the elongated slender part beneath the stigma. The basal bulged part of the pistil is the ovary. Inside the ovary is the ovarian cavity (locule). The placenta is located inside the ovarian cavity. Arising from the placenta are the megasporangia, commonly called ovules. The number of ovules in an ovary may range from one (wheat, paddy, mango) to many (papaya, water melon, orchids).

The Megasporangium (Ovule): Let us familiarise ourselves with the structure of a typical angiosperm ovule (Figure 2.7d). The ovule is a small structure attached to the placenta by means of a stalk called the funicle. The body of the ovule fuses with the funicle in the region called the hilum. Thus, the hilum represents the junction between the ovule and the funicle. Each ovule has one or two protective envelopes called integuments. Integuments encircle the nucellus except at the tip, where a small opening called the micropyle is organised. Opposite the micropylar end is the chalaza, representing the basal part of the ovule. Enclosed within the integuments is a mass of cells called the nucellus. Cells of the nucellus have abundant reserve food materials. Located in the nucellus is the embryo sac or female gametophyte. An ovule generally has a single embryo sac formed from a megaspore.

Megasporogenesis: The process of formation of megaspores from the megaspore mother cell is called megasporogenesis. Ovules generally differentiate a single megaspore mother cell (MMC) in the micropylar region of the nucellus. It is a large cell containing dense cytoplasm and a prominent nucleus. The MMC undergoes meiotic division. What is the importance of the MMC undergoing meiosis? Meiosis results in the production of four megaspores (Figure 2.8a).

Female gametophyte: In a majority of flowering plants, one of the megaspores is functional while the other three degenerate. Only the functional megaspore develops into the female gametophyte (embryo sac). This method of embryo sac formation from a single megaspore is termed monosporic development. What will be the ploidy of the cells of the nucellus, the MMC, the functional megaspore and the female gametophyte?

Let us study the formation of the embryo sac in a little more detail. The nucleus of the functional megaspore divides mitotically to form two nuclei, which move to opposite poles, forming the 2-nucleate embryo sac. Two more sequential mitotic nuclear divisions result in the formation of the 4-nucleate and later the 8-nucleate stages of the embryo sac.
It is of interest to note that these mitotic divisions are strictly free nuclear, that is, nuclear divisions are not followed immediately by cell wall formation. After the 8-nucleate stage, cell walls are laid down, leading to the organisation of the typical female gametophyte or embryo sac. Observe the distribution of cells inside the embryo sac (Figure 2.8b, c). Six of the eight nuclei are surrounded by cell walls and organised into cells; the remaining two nuclei, called polar nuclei, are situated below the egg apparatus in the large central cell. There is a characteristic distribution of the cells within the embryo sac. Three cells are grouped together at the micropylar end and constitute the egg apparatus. The egg apparatus, in turn, consists of two synergids and one egg cell. The synergids have special cellular thickenings at the micropylar tip called the filiform apparatus, which play an important role in guiding the pollen tubes into the synergid. Three cells are at the chalazal end and are called the antipodals. The large central cell, as mentioned earlier, has two polar nuclei. Thus, a typical angiosperm embryo sac, at maturity, though 8-nucleate, is 7-celled.
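The nuclear and cell counts above lend themselves to a small bookkeeping sketch. The Python snippet below is an illustration rather than textbook material: it tracks the haploid nucleus count through the three free-nuclear mitoses and then groups the eight nuclei into the seven cells of the mature embryo sac.

```python
# Illustrative bookkeeping for monosporic embryo sac development.
# MMC (2n) -> meiosis -> 4 megaspores (n); one functional megaspore
# then undergoes three free-nuclear mitoses: 1 -> 2 -> 4 -> 8 nuclei.

nuclei = 1  # the functional megaspore nucleus (haploid, n)
for division in range(3):
    nuclei *= 2  # each mitotic division doubles the nucleus count
print(nuclei)  # 8

# After wall formation, the 8 nuclei are organised into 7 cells:
embryo_sac = {
    "egg apparatus (micropylar end)": {"egg cell": 1, "synergids": 2},
    "antipodals (chalazal end)": {"antipodal cells": 3},
    "central cell": {"cells": 1, "polar nuclei": 2},  # one cell, two nuclei
}

cells = 1 + 2 + 3 + 1          # egg + synergids + antipodals + central cell
nuclei_total = 1 + 2 + 3 + 2   # six walled nuclei + two polar nuclei
print(cells, nuclei_total)     # 7 8  -> "8-nucleate but 7-celled"
```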
This is the only type of pollination that brings genetically different pollen grains to the stigma.

Agents of Pollination : Plants use two abiotic agents (wind and water) and one biotic agent (animals) to achieve pollination. The majority of plants use biotic agents; only a small proportion use abiotic agents. Pollen grains coming in contact with the stigma is a chance factor in both wind and water pollination. To compensate for these uncertainties and the associated loss of pollen grains, the flowers produce an enormous amount of pollen compared to the number of ovules available for pollination.

Pollination by wind is the more common form of abiotic pollination. Wind pollination requires that the pollen grains are light and non-sticky, so that they can be transported in wind currents. Wind-pollinated plants often possess well-exposed stamens (so that the pollen is easily dispersed into wind currents) and a large, often feathery stigma to easily trap air-borne pollen grains. Wind-pollinated flowers often have a single ovule in each ovary and numerous flowers packed into an inflorescence; a familiar example is the corn cob – the tassels you see are nothing but the stigmas and styles, which wave in the wind to trap pollen grains. Wind-pollination is quite common in grasses.

Pollination by water is quite rare in flowering plants and is limited to about 30 genera, mostly monocotyledons. As against this, you would recall that water is a regular mode of transport for the male gametes among lower plant groups such as algae, bryophytes and pteridophytes. It is believed, particularly for some bryophytes and pteridophytes, that their distribution is limited because of the need for water for the transport of male gametes and for fertilisation. Some examples of water-pollinated plants are Vallisneria and Hydrilla, which grow in fresh water, and several marine sea-grasses such as Zostera. Not all aquatic plants use water for pollination; in a majority of aquatic plants, such as water hyacinth and water lily, the flowers emerge above the level of the water and are pollinated by insects or wind, as in most land plants. In Vallisneria, the female flower reaches the surface of the water on a long stalk, and the male flowers or pollen grains are released on to the surface of the water. They are carried passively by water currents (Figure 2.11a); some of them eventually reach the female flowers and the stigma. In another group of water-pollinated plants, such as seagrasses, the female flowers remain submerged and the pollen grains are released inside the water. Pollen grains in many such species are long and ribbon-like, and they are carried passively inside the water; some of them reach the stigma and achieve pollination. In most water-pollinated species, pollen grains are protected from wetting by a mucilaginous covering. Flowers pollinated by wind or water are not very colourful and do not produce nectar.

The majority of insect-pollinated flowers are large, colourful, fragrant and rich in nectar. When the flowers are small, a number of flowers are clustered into an inflorescence to make them conspicuous. Animals are attracted to flowers by colour and/or fragrance. Flowers pollinated by flies and beetles secrete foul odours to attract these animals. To sustain animal visits, the flowers have to provide rewards to the animals.
Nectar and pollen grains are the usual floral rewards. In harvesting the reward(s) from the flower, the animal visitor comes in contact with the anthers and the stigma. The body of the animal gets a coating of pollen grains, which are generally sticky in animal-pollinated flowers. When the animal carrying pollen on its body comes in contact with the stigma, it brings about pollination.

In some species the floral reward is a safe place to lay eggs; an example is the tallest flower, of Amorphophallus (the flower itself is about 6 feet in height). A similar relationship exists between a species of moth and the plant Yucca, where both species – moth and plant – cannot complete their life cycles without each other. The moth deposits its eggs in the locule of the ovary and the flower, in turn, gets pollinated by the moth. The larvae of the moth come out of the eggs as the seeds start developing.

Outbreeding Devices : The majority of flowering plants produce hermaphrodite flowers, and pollen grains are therefore likely to come in contact with the stigma of the same flower. Continued self-pollination results in inbreeding depression. Flowering plants have developed many devices to discourage self-pollination and to encourage cross-pollination. In some species, pollen release and stigma receptivity are not synchronised: either the pollen is released before the stigma becomes receptive, or the stigma becomes receptive much before the release of pollen. In some other species, the anther and stigma are placed at different positions, so that the pollen cannot come in contact with the stigma of the same flower. Both these devices prevent autogamy. The third device to prevent inbreeding is self-incompatibility. This is a genetic mechanism that prevents self-pollen (from the same flower or other flowers of the same plant) from fertilising the ovules, by inhibiting pollen germination or pollen tube growth in the pistil. Another device to prevent self-pollination is the production of unisexual flowers. If both male and female flowers are present on the same plant, as in castor and maize (monoecious), this prevents autogamy but not geitonogamy. In several species, such as papaya, male and female flowers are present on different plants, that is, each plant is either male or female (dioecy). This condition prevents both autogamy and geitonogamy.

Pollen-pistil Interaction : Pollination does not guarantee the transfer of the right type of pollen (compatible pollen of the same species as the stigma). Often, pollen of the wrong type, either from another species or from the same plant (if it is self-incompatible), also lands on the stigma. The pistil has the ability to recognise the pollen, whether it is of the right type (compatible) or of the wrong type (incompatible). If it is of the right type, the pistil accepts the pollen and promotes the post-pollination events that lead to fertilisation. If the pollen is of the wrong type, the pistil rejects it by preventing pollen germination on the stigma or pollen tube growth in the style. The ability of the pistil to recognise the pollen, followed by its acceptance or rejection, is the result of a continuous dialogue between pollen grain and pistil. This dialogue is mediated by chemical components of the pollen interacting with those of the pistil.
It is only in recent years that botanists have been able to identify some of the pollen and pistil components and the interactions leading to the recognition, followed by acceptance or rejection.

As mentioned earlier, following compatible pollination, the pollen grain germinates on the stigma to produce a pollen tube through one of the germ pores (Figure 2.12a). The contents of the pollen grain move into the pollen tube. The pollen tube grows through the tissues of the stigma and style and reaches the ovary (Figure 2.12b, c). You would recall that in some plants pollen grains are shed at the two-celled condition (a vegetative cell and a generative cell). In such plants, the generative cell divides and forms the two male gametes during the growth of the pollen tube in the stigma.
In plants which shed pollen in the three-celled condition, pollen tubes carry the two male gametes from the beginning. The pollen tube, after reaching the ovary, enters the ovule through the micropyle and then enters one of the synergids through the filiform apparatus (Figure 2.12d, e). Many recent studies have shown that the filiform apparatus present at the micropylar part of the synergids guides the entry of the pollen tube. All these events – from pollen deposition on the stigma until pollen tubes enter the ovule – are together referred to as pollen-pistil interaction. As pointed out earlier, pollen-pistil interaction is a dynamic process involving pollen recognition followed by promotion or inhibition of the pollen. The knowledge gained in this area would help the plant breeder in manipulating pollen-pistil interaction, even in incompatible pollinations, to obtain desired hybrids.

As you shall learn in the chapter on plant breeding (Chapter 9), a breeder is interested in crossing different species, and often genera, to combine desirable characters and produce commercially 'superior' varieties. Artificial hybridisation is one of the major approaches of crop improvement programmes. In such crossing experiments it is important to make sure that only the desired pollen grains are used for pollination and that the stigma is protected from contamination (from unwanted pollen). This is achieved by emasculation and bagging techniques.

If the female parent bears bisexual flowers, removal of the anthers from the flower bud before the anther dehisces, using a pair of forceps, is necessary. This step is referred to as emasculation. Emasculated flowers have to be covered with a bag of suitable size, generally made of butter paper, to prevent contamination of the stigma with unwanted pollen. This process is called bagging. When the stigma of the bagged flower attains receptivity, mature pollen grains collected from the anthers of the male parent are dusted on the stigma, the flowers are rebagged, and the fruits allowed to develop.

If the female parent produces unisexual flowers, there is no need for emasculation. The female flower buds are bagged before the flowers open. When the stigma becomes receptive, pollination is carried out using the desired pollen and the flower rebagged.
As ovules mature into seeds, the ovary develops into a fruit; that is, the transformation of ovules into seeds and of the ovary into a fruit proceeds simultaneously. The wall of the ovary develops into the wall of the fruit, called the pericarp. The fruits may be fleshy, as in guava, orange and mango, or may be dry, as in groundnut and mustard.
Many fruits have evolved mechanisms for the dispersal of seeds. Recall the classification of fruits and their dispersal mechanisms that you studied in an earlier class. Is there any relationship between the number of ovules in an ovary and the number of seeds present in a fruit?

In most plants, by the time the fruit develops from the ovary, the other floral parts degenerate and fall off. However, in a few species such as apple, strawberry and cashew, the thalamus also contributes to fruit formation. Such fruits are called false fruits (Figure 2.15b). Most fruits, however, develop only from the ovary and are called true fruits. Although in most species fruits are the result of fertilisation, there are a few species in which fruits develop without fertilisation. Such fruits are called parthenocarpic fruits; banana is one such example. Parthenocarpy can be induced through the application of growth hormones, and such fruits are seedless.

Seeds offer several advantages to angiosperms. Firstly, since reproductive processes such as pollination and fertilisation are independent of water, seed formation is more dependable. Seeds also have better adaptive strategies for dispersal to new habitats, helping the species colonise other areas. As seeds have sufficient food reserves, young seedlings are nourished until they are capable of photosynthesis on their own. The hard seed coat provides protection to the young embryo. Being products of sexual reproduction, seeds generate new genetic combinations, leading to variation.

Seed is the basis of our agriculture. Dehydration and dormancy of mature seeds are crucial for the storage of seeds, which can be used as food throughout the year and also to raise a crop in the next season. Can you imagine agriculture in the absence of seeds, or with seeds that germinate straight away soon after formation and cannot be stored? How long do seeds remain alive after they are dispersed? This period varies greatly: in a few species the seeds lose viability within a few months, seeds of a large number of species live for several years, and some seeds can remain alive for hundreds of years. There are several records of very old yet viable seeds. The oldest is that of a lupine, Lupinus arcticus, excavated from the Arctic tundra; the seed germinated and flowered after an estimated 10,000 years of dormancy. A recent record of a 2,000-year-old viable seed is of the date palm, Phoenix dactylifera, discovered during the archaeological excavation at King Herod's palace near the Dead Sea.

Although seeds, in general, are the products of fertilisation, a few flowering plants, such as some species of Asteraceae and grasses, have evolved a special mechanism to produce seeds without fertilisation, called apomixis. (What is fruit production without fertilisation called?) Thus, apomixis is a form of asexual reproduction that mimics sexual reproduction. There are several ways in which apomictic seeds develop. In some species, the diploid egg cell is formed without reduction division and develops into the embryo without fertilisation. More often, as in many Citrus and mango varieties, some of the nucellar cells surrounding the embryo sac start dividing, protrude into the embryo sac and develop into embryos; in such species each ovule contains many embryos. The occurrence of more than one embryo in a seed is referred to as polyembryony.

Hybrid varieties of several of our food and vegetable crops are being extensively cultivated.
Cultivation of hybrids has tremendously increased productivity. One problem with hybrids is that hybrid seed has to be produced every year: if the seeds collected from hybrids are sown, the plants in the progeny segregate and do not maintain the hybrid characters. Production of hybrid seed is costly, and hence hybrid seed becomes too expensive for the farmers. If these hybrids are made into apomicts, there is no segregation of characters in the hybrid progeny. Farmers can then keep on using the hybrid seed to raise a new crop year after year and do not have to buy hybrid seed every year. Because of the importance of apomixis to the hybrid seed industry, active research is going on in many laboratories around the world to understand the genetics of apomixis and to transfer apomictic genes into hybrid varieties.
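To see why hybrid progeny segregate, a back-of-the-envelope calculation helps. The sketch below is a minimal illustration (not from the text), assuming each gene pair segregates independently and that a selfed heterozygote has a 1/2 chance of remaining heterozygous at each locus; the fraction of progeny retaining the full hybrid genotype then falls off exponentially with the number of gene pairs involved.

```python
# Fraction of selfed-hybrid progeny that remain heterozygous at every
# locus, assuming n independently segregating gene pairs and a 1/2
# chance of remaining heterozygous at each locus (an assumption for
# illustration, following standard Mendelian segregation).
def hybrid_fraction(n_loci: int) -> float:
    return 0.5 ** n_loci

for n in (1, 5, 10):
    print(n, hybrid_fraction(n))
# 1 0.5
# 5 0.03125
# 10 0.0009765625  -> under 0.1% keep the full hybrid genotype
```

Even with only ten gene pairs contributing to the hybrid's character, fewer than one plant in a thousand would breed true, which is why fresh hybrid seed, or an apomictic hybrid, is needed every season.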
Let us take the example of one such hybridisation experiment carried out by Mendel, in which he crossed tall and dwarf pea plants to study the inheritance of one gene (Figure 5.2). He collected the seeds produced as a result of this cross and grew them to generate plants of the first hybrid generation. This generation is also called the Filial1 progeny, or the F1. Mendel observed that all the F1 progeny plants were tall, like one of the parents; none were dwarf (Figure 5.3). He made similar observations for the other pairs of traits – he found that the F1 always resembled either one of the parents, and that the trait of the other parent was not seen in them.

Mendel then self-pollinated the tall F1 plants and, to his surprise, found that in the Filial2 generation some of the offspring were dwarf: the character that was not seen in the F1 generation was now expressed. The proportion of plants that were dwarf was 1/4th of the F2 plants, while 3/4th of the F2 plants were tall.
The tall and dwarf traits were identical to their parental types and did not show any blending; that is, all the offspring were either tall or dwarf, and none were of in-between height (Figure 5.3).

Similar results were obtained with the other traits that he studied: only one of the parental traits was expressed in the F1 generation, while at the F2 stage both traits were expressed in the proportion 3:1. The contrasting traits did not show any blending at either the F1 or the F2 stage.

Based on these observations, Mendel proposed that something was being stably passed down, unchanged, from parent to offspring through the gametes, over successive generations. He called these units 'factors'; now we call them genes. Genes, therefore, are the units of inheritance. They contain the information required to express a particular trait in an organism. Genes which code for a pair of contrasting traits are known as alleles, i.e., they are slightly different forms of the same gene.

If we use alphabetical symbols for each gene, then the capital letter is used for the trait expressed at the F1 stage and the small letter for the other trait. For example, in the case of the character height, T is used for the tall trait and t for the dwarf, and T and t are alleles of each other. Hence, in plants the pair of alleles for height would be TT, Tt or tt. Mendel also proposed that in a true-breeding tall or dwarf pea variety the allelic pair of genes for height is identical, or homozygous – TT and tt, respectively. TT and tt are called the genotype of the plant, while the descriptive terms tall and dwarf are the phenotype. What then would be the phenotype of a plant that had the genotype Tt?

As Mendel found the phenotype of the F1 heterozygote Tt to be exactly like the TT parent in appearance, he proposed that in a pair of dissimilar factors one dominates the other (as in the F1) and is hence called the dominant factor, while the other factor is recessive. In this case T (for tallness) is dominant over t (for dwarfness), which is recessive. He observed identical behaviour for all the other characters/trait pairs that he studied. It is convenient (and logical) to use the capital and lower case of the same alphabetical symbol to remember this concept of dominance and recessiveness. (Do not use T for tall and d for dwarf, because you will find it difficult to remember whether T and d are alleles of the same gene/character or not.) Alleles can be similar, as in the homozygotes TT and tt, or dissimilar, as in the heterozygote Tt. Since the Tt plant is heterozygous for the genes controlling one character (height), it is a monohybrid, and the cross between TT and tt is a monohybrid cross.

From the observation that the recessive parental trait is expressed without any blending in the F2 generation, we can infer that when the tall and dwarf plants produce gametes by the process of meiosis, the alleles of the parental pair separate or segregate from each other, and only one allele is transmitted to a gamete. This segregation of alleles is a random process, so there is a 50 per cent chance of a gamete containing either allele, as has been verified by the results of the crosses. In this way the gametes of the tall TT plants have the allele T and the gametes of the dwarf tt plants have the allele t.
During fertilisation the two alleles, T from one parent (say, through the pollen) and t from the other parent (through the egg), are united to produce zygotes that have one T allele and one t allele. In other words, the hybrids have Tt. Since these hybrids contain alleles which express contrasting traits, the plants are heterozygous. The production of gametes by the parents, the formation of the zygotes, and the F1 and F2 plants can be understood from a diagram called a Punnett square, as shown in Figure 5.4. It was developed by a British geneticist, Reginald C. Punnett. It is a graphical representation used to calculate the probability of all possible genotypes of offspring in a genetic cross. The possible gametes are written on two sides, usually the top row and the left column, and all possible combinations are represented in the boxes below, which generates a square output form.
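The same bookkeeping the Punnett square does on paper can be sketched in a few lines of code. The snippet below is a minimal illustration (our own code; the name `punnett` is ours) that enumerates every gamete combination for the Tt × Tt monohybrid cross and recovers the 1:2:1 genotypic and 3:1 phenotypic ratios described above.

```python
from collections import Counter
from itertools import product

def punnett(parent1: str, parent2: str) -> Counter:
    """Enumerate a Punnett square: every gamete (allele) of one
    parent combined with every gamete of the other."""
    combos = Counter()
    for g1, g2 in product(parent1, parent2):
        combos["".join(sorted(g1 + g2))] += 1  # 'Tt' and 'tT' are the same
    return combos

# Monohybrid cross of two F1 hybrids (Tt x Tt)
square = punnett("Tt", "Tt")
print(square)  # Counter({'Tt': 2, 'TT': 1, 'tt': 1})  -> 1:2:1 genotypes

# Phenotypes: T (tall) is dominant over t (dwarf)
tall = sum(n for genotype, n in square.items() if "T" in genotype)
dwarf = square["tt"]
print(tall, ":", dwarf)  # 3 : 1
```

Changing the arguments to `punnett("TT", "tt")` reproduces the uniformly Tt (all tall) F1 generation of the original cross.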
The mechanism of sex determination has always been a puzzle for geneticists. The initial clue about the genetic/chromosomal mechanism of sex determination can be traced back to experiments carried out in insects. In fact, the cytological observations made in a number of insects led to the development of the concept of a genetic/chromosomal basis of sex determination. Henking (1891) traced a specific nuclear structure throughout spermatogenesis in a few insects, and observed that 50 per cent of the sperm received this structure after spermatogenesis, whereas the other 50 per cent did not. Henking named this structure the X body, but he could not explain its significance. Further investigations by other scientists led to the conclusion that the 'X body' of Henking was in fact a chromosome, and that is why it was given the name X-chromosome. It was also observed that in a large number of insects the mechanism of sex determination is of the XO type, i.e., all eggs bear an additional X-chromosome besides the other chromosomes (autosomes), whereas some of the sperm bear the X-chromosome and some do not. Eggs fertilised by sperm having an X-chromosome become females and those fertilised by sperm without an X-chromosome become males. Do you think the numbers of chromosomes in the male and female are equal? Due to its involvement in the determination of sex, the X-chromosome was designated the sex chromosome, and the rest of the chromosomes were named autosomes. The grasshopper is an example of XO-type sex determination, in which the males have only one X-chromosome besides the autosomes, whereas the females have a pair of X-chromosomes.

These observations led to the investigation of a number of species to understand the mechanism of sex determination. In a number of other insects and in mammals, including man, the XY type of sex determination is seen, where both male and female have the same number of chromosomes. Among the males an X-chromosome is present, but its counterpart is distinctly smaller and is called the Y-chromosome. Females, however, have a pair of X-chromosomes. Both males and females bear the same number of autosomes. Hence, the males have autosomes plus XY, while the females have autosomes plus XX. In human beings and in Drosophila the males have one X and one Y chromosome, whereas the females have a pair of X-chromosomes besides the autosomes (Figure 5.12 a, b).

In the above description you have studied two types of sex-determining mechanisms, the XO type and the XY type. In both cases males produce two different types of gametes: (a) either with or without an X-chromosome, or (b) some gametes with an X-chromosome and some with a Y-chromosome. Such a mechanism of sex determination is an example of male heterogamety. In some other organisms, e.g., birds, a different mechanism of sex determination is observed (Figure 5.12 c). In this case the total number of chromosomes is the same in both males and females, but two different types of gametes, in terms of the sex chromosomes, are produced by the females, i.e., female heterogamety. To distinguish this from the mechanisms described earlier, the two different sex chromosomes of a female bird have been designated the Z and W chromosomes. In these organisms the females have one Z and one W chromosome, whereas the males have a pair of Z-chromosomes besides the autosomes.

It has already been mentioned that the sex-determining mechanism in humans is of the XY type. Out of the 23 pairs of chromosomes present, 22 pairs are exactly the same in both males and females; these are the autosomes. A pair of X-chromosomes is present in the female, whereas the presence of an X and a Y chromosome is determinant of the male characteristic. During spermatogenesis in males, two types of gametes are produced: 50 per cent of the total sperm produced carry the X-chromosome and the remaining 50 per cent carry the Y-chromosome, besides the autosomes. Females, however, produce only one type of ovum, with an X-chromosome. There is an equal probability of fertilisation of the ovum by a sperm carrying either the X or the Y chromosome. If the ovum is fertilised by a sperm carrying an X-chromosome, the zygote develops into a female (XX); fertilisation of the ovum by a Y-carrying sperm results in a male offspring. Thus, it is evident that it is the genetic makeup of the sperm that determines the sex of the child. It is also evident that in each pregnancy there is always a 50 per cent probability of either a male or a female child. It is unfortunate that in our society women are blamed for giving birth to female children and have been ostracised and ill-treated because of this false notion.
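The 50:50 expectation is easy to check with a small simulation. This is an illustrative sketch only (our own code, not part of the text): each fertilisation pairs an X-bearing ovum with a sperm that is equally likely to carry X or Y.

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

def offspring_sex() -> str:
    """One fertilisation: the ovum always carries X; the sperm
    carries X or Y with equal probability."""
    sperm = random.choice(["X", "Y"])
    return "female" if sperm == "X" else "male"  # XX female, XY male

trials = 100_000
females = sum(offspring_sex() == "female" for _ in range(trials))
print(females / trials)  # approximately 0.5; each pregnancy is an
                         # independent 50:50 event
```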
Mutation is a phenomenon which results in alteration of DNA sequences and consequently in changes in the genotype and the phenotype of an organism. In addition to recombination, mutation is another phenomenon that leads to variation in DNA.

As you will learn in Chapter 6, one DNA helix runs continuously from one end of each chromatid to the other, in a highly supercoiled form. Therefore, loss (deletion) or gain (insertion/duplication) of a segment of DNA results in alteration of the chromosome. Since genes are known to be located on chromosomes, alterations in chromosomes result in abnormalities or aberrations. Chromosomal aberrations are commonly observed in cancer cells.

In addition to the above, mutations also arise due to a change in a single base pair of DNA. This is known as a point mutation; a classical example of such a mutation is sickle-cell anaemia. Deletions and insertions of base pairs of DNA cause frame-shift mutations.

The mechanism of mutation is beyond the scope of this discussion at this level. However, there are many chemical and physical factors that induce mutations; these are referred to as mutagens. UV radiation, for example, can cause mutations in organisms – it is a mutagen.

The idea that disorders are inherited has prevailed in human society for a long time, based on the heritability of certain characteristic features in families. After the rediscovery of Mendel's work, the practice of analysing the inheritance patterns of traits in human beings began. Since controlled crosses of the kind performed in pea plants and other organisms are not possible in human beings, study of the family history of inheritance of a particular trait provides an alternative. Such an analysis of traits over several generations of a family is called pedigree analysis. In pedigree analysis the inheritance of a particular trait is represented in the family tree over generations. In human genetics, pedigree study provides a strong tool for tracing the inheritance of a specific trait, abnormality or disease. Some of the important standard symbols used in pedigree analysis are shown in Figure 5.13.

As you have studied in this chapter, each and every feature in any organism is controlled by one or another gene located on the DNA present in the chromosome. DNA is the carrier of genetic information and is hence transmitted from one generation to the other without any change or alteration. However, changes or alterations do take place occasionally; such an alteration or change in the genetic material is referred to as mutation. A number of disorders in human beings have been found to be associated with the inheritance of changed or altered genes or chromosomes.

Broadly, genetic disorders may be grouped into two categories – Mendelian disorders and chromosomal disorders. Mendelian disorders are mainly determined by alteration or mutation in a single gene. These disorders are transmitted to the offspring along the same lines as we have studied in the principles of inheritance, and the pattern of inheritance of such Mendelian disorders can be traced in a family by pedigree analysis. The most common and prevalent Mendelian disorders are haemophilia, cystic fibrosis, sickle-cell anaemia, colour blindness, phenylketonuria, thalassaemia, etc. It is important to mention here that Mendelian disorders may be dominant or recessive. By pedigree analysis one can easily understand whether the trait in question is dominant or recessive. The trait may also be linked to the sex chromosome, as in the case of haemophilia; this X-linked recessive trait shows transmission from carrier female to male progeny. A representative pedigree is shown in Figure 5.14 for dominant and recessive traits. Discuss with your teacher and design pedigrees for characters linked to both autosomes and the sex chromosome.
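In computational terms, the family tree used in pedigree analysis is just a small graph. The sketch below is a hypothetical illustration of this idea (the identifiers follow the usual generation–individual numbering, and the data and helper name `obligate_carriers` are our own): from an affected child, an autosomal recessive trait lets us mark both parents as obligate carriers.

```python
# A pedigree as a tiny graph: individual id -> (sex, father, mother,
# affected?). The individuals and trait here are hypothetical.
pedigree = {
    "I-1":  ("M", None,  None,  False),
    "I-2":  ("F", None,  None,  False),
    "II-1": ("M", "I-1", "I-2", True),   # affected son
    "II-2": ("F", "I-1", "I-2", False),
}

def obligate_carriers(pedigree):
    """For an autosomal recessive trait, both parents of any affected
    individual must each carry at least one copy of the mutant allele."""
    carriers = set()
    for _, (sex, father, mother, affected) in pedigree.items():
        if affected and father is not None and mother is not None:
            carriers.update([father, mother])
    return carriers

print(obligate_carriers(pedigree))  # {'I-1', 'I-2'} (in some order)
```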
Colour Blindness : This is a sex-linked recessive disorder, due to a defect in either the red or the green cone of the eye, resulting in a failure to discriminate between red and green colour. The defect is due to mutation in certain genes present on the X chromosome. It occurs in about 8 per cent of males and only about 0.4 per cent of females, because the genes that lead to red-green colour blindness are on the X chromosome: males have only one X chromosome, whereas females have two. The son of a woman who carries the gene has a 50 per cent chance of being colour blind. The mother is not herself colour blind, because the gene is recessive – its effect is suppressed by her matching dominant normal gene.
A daughter will not normally be colour blind unless her mother is a carrier and her father is colour blind.

Haemophilia : This sex-linked recessive disease, which shows transmission from an unaffected carrier female to some of her male progeny, has been widely studied. In this disease a single protein that is part of the cascade of proteins involved in the clotting of blood is affected. Due to this, in an affected individual even a simple cut results in non-stop bleeding. The heterozygous female (carrier) for haemophilia may transmit the disease to her sons. The possibility of a female becoming haemophilic is extremely rare, because the mother of such a female would have to be at least a carrier and the father would have to be haemophilic (unviable in the later stages of life). The family pedigree of Queen Victoria shows a number of haemophilic descendants, as she was a carrier of the disease.

Sickle-cell anaemia : This is an autosome-linked recessive trait that can be transmitted from parents to offspring when both partners are carriers of the gene (heterozygous). The disease is controlled by a single pair of alleles, HbA and HbS. Of the three possible genotypes, only individuals homozygous for HbS (HbSHbS) show the diseased phenotype. Heterozygous (HbAHbS) individuals appear apparently unaffected, but they are carriers of the disease, as there is a 50 per cent probability of transmission of the mutant gene to the progeny; they thus exhibit the sickle-cell trait (Figure 5.15). The defect is caused by the substitution of glutamic acid (Glu) by valine (Val) at the sixth position of the beta-globin chain of the haemoglobin molecule. This amino acid substitution in the globin protein results from a single base substitution at the sixth codon of the beta-globin gene, from GAG to GUG. The mutant haemoglobin molecule undergoes polymerisation under low oxygen tension, causing the shape of the RBC to change from a biconcave disc to an elongated, sickle-like structure (Figure 5.15).

Phenylketonuria : This inborn error of metabolism is also inherited as an autosomal recessive trait. The affected individual lacks an enzyme that converts the amino acid phenylalanine into tyrosine. As a result, phenylalanine accumulates and is converted into phenylpyruvic acid and other derivatives. Accumulation of these in the brain results in mental retardation. They are also excreted in urine because of poor absorption by the kidney.
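The transmission probabilities quoted for these disorders follow from enumerating gametes, exactly as in a Punnett square. The snippet below is an illustrative sketch (our own notation: 'Xc' marks an X chromosome carrying the recessive allele) for an X-linked recessive trait such as colour blindness or haemophilia; the same enumeration for an autosomal pair (HbA/HbS carrier × carrier) gives the 1/4 affected, 1/2 carrier, 1/4 normal expectation for sickle-cell anaemia.

```python
from collections import Counter
from itertools import product

def classify(maternal: str, paternal: str) -> str:
    """Phenotype of one child from an egg and a sperm, for an
    X-linked recessive trait ('Xc' = X carrying the mutant allele)."""
    if paternal == "Y":                     # son: hemizygous for the X
        return "affected son" if maternal == "Xc" else "normal son"
    alleles = {maternal, paternal}          # daughter: two X chromosomes
    if alleles == {"Xc"}:
        return "affected daughter"
    return "carrier daughter" if "Xc" in alleles else "normal daughter"

mother = ["X", "Xc"]   # carrier female
father = ["X", "Y"]    # normal male
counts = Counter(classify(egg, sperm)
                 for egg, sperm in product(mother, father))
print(counts)
# Counter({'normal daughter': 1, 'carrier daughter': 1,
#          'normal son': 1, 'affected son': 1})
```

Half the sons are affected, matching the 50 per cent figure above, and no daughter is affected unless the father is himself affected (gametes `["Xc", "Y"]`).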
Embryological support for evolution was also proposed by Ernst Haeckel, based on the observation of certain features during the embryonic stage that are common to all vertebrates but absent in the adult. For example, the embryos of all vertebrates, including humans, develop a row of vestigial gill slits just behind the head, but these are a functional organ only in fish and are not found in any other adult vertebrate. However, this proposal was disproved by a careful study performed by Karl Ernst von Baer, who noted that embryos never pass through the adult stages of other animals. Comparative anatomy and morphology show similarities and differences among organisms of today and those that existed years ago.

Such similarities can be interpreted to understand whether common ancestors were shared or not. For example, whales, bats, cheetahs and humans (all mammals) share similarities in the pattern of bones of the forelimbs (Figure 7.3b). Though these forelimbs perform different functions in these animals, they have a similar anatomical structure – all of them have a humerus, radius, ulna, carpals, metacarpals and phalanges in their forelimbs. Hence, in these animals the same structure developed along different directions, owing to adaptations to different needs. This is divergent evolution, and these structures are homologous. Homology indicates common ancestry. Other examples are vertebrate hearts or brains. In plants, too, the thorns and tendrils of Bougainvillea and Cucurbita represent homology (Figure 7.3a). Homology is based on divergent evolution, whereas analogy refers to the exactly opposite situation. The wings of a butterfly and of a bird look alike, but they are not anatomically similar structures, though they perform similar functions. Hence, analogous structures are a result of convergent evolution – different structures evolving for the same function and hence coming to resemble one another. Other examples of analogy are the eye of the octopus and of mammals, or the flippers of penguins and dolphins. One can say that it is the similar habitat that has resulted in the selection of similar adaptive features in different groups of organisms, toward the same function. Sweet potato (a root modification) and potato (a stem modification) are another example of analogy.

In the same line of argument, similarities in proteins and genes performing a given function among diverse organisms give clues to common ancestry. These biochemical similarities point to the same shared ancestry as do structural similarities among diverse organisms.

Man has bred selected plants and animals for agriculture, horticulture, sport or security. Man has domesticated many wild animals and crops. This intensive breeding programme has created breeds that differ from other breeds (e.g., dogs) but are still of the same group. It is argued that if, within hundreds of years, man could create new breeds, could not nature have done the same over millions of years?

Another interesting observation supporting evolution by natural selection comes from England. In a collection of moths made in the 1850s, i.e., before industrialisation set in, it was observed that there were more white-winged moths on trees than dark-winged or melanised moths. However, in the collection carried out from the same area after industrialisation, i.e., in 1920, there were more dark-winged moths in the same area – the proportion was reversed.

The explanation put forth for this observation was that predators will spot a moth against a contrasting background. During the post-industrialisation period the tree trunks became dark due to industrial smoke and soot. Under this condition the white-winged moths did not survive, due to predators, while the dark-winged or melanised moths survived. Before industrialisation set in, a thick growth of almost white-coloured lichen covered the trees; against that background the white-winged moths survived, but the dark-coloured moths were picked out by predators. (Do you know that lichens can be used as industrial pollution indicators? They will not grow in areas that are polluted.) Hence, moths that were able to camouflage themselves, i.e., hide against the background, survived (Figure 7.4). This understanding is supported by the fact that in areas where industrialisation did not occur, e.g., in rural areas, the count of melanic moths was low. This shows that in a mixed population, those that can better adapt survive and increase in population size.
Remember that no variant is completely wiped out.\n\nSimilarly, excessive use of herbicides, pesticides, etc., has only resulted in the selection of resistant varieties on a much shorter time scale. The same is true of microbes against which we employ antibiotics, or of eukaryotic organisms and cells against which we employ drugs. Hence, resistant organisms or cells are appearing on a time scale of months or years, not centuries. These are examples of evolution by anthropogenic action. This also tells us that evolution is not a directed process in the sense of determinism. It is a stochastic process based on chance events in nature and chance mutations in organisms.", "doc_id": "5707a838-4b9f-11ed-ae1b-0242ac110007"}
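The moth counts and the rapid appearance of pesticide- and antibiotic-resistant variants are both cases of differential survival shifting the composition of a population. A toy simulation in Python; the survival rates and starting numbers are invented for illustration and are not measurements from the passage:

```python
import random

def next_generation(population, survival):
    """Cull by phenotype-dependent survival, then breed back to the original size."""
    survivors = [m for m in population if random.random() < survival[m]]
    # One-locus caricature: each offspring resembles a random surviving parent.
    return [random.choice(survivors) for _ in range(len(population))]

random.seed(1)
population = ["light"] * 950 + ["dark"] * 50   # pre-industrial mix
survival = {"light": 0.3, "dark": 0.7}         # sooty trunks now favour dark moths

for gen in range(1, 11):
    population = next_generation(population, survival)
    print(gen, population.count("dark") / len(population))
```

Within a few generations the dark fraction dominates; swapping the two survival rates reverses the trend, just as the lichen-covered and soot-darkened backgrounds did.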
{"source": "NCERT XII Political Science, India", "document": "The two superpowers were keen on expanding their spheres of influence in different parts of the world. In a world sharply divided between the two alliance systems, a state was expected to remain tied to its protective superpower so as to limit the influence of the other superpower and its allies. The smaller states in the alliances used the link to the superpowers for their own purposes: they got promises of protection, weapons and economic aid against their local rivals, mostly regional neighbours with whom they had rivalries. The alliance systems led by the two superpowers therefore threatened to divide the entire world into two camps. This division happened first in Europe. Most countries of western Europe sided with the US, and those of eastern Europe joined the Soviet camp. That is why these were also called the ‘western’ and the ‘eastern’ alliances.\n\nThe western alliance was formalised into an organisation, the North Atlantic Treaty Organisation (NATO), which came into existence in April 1949.
It was an association of twelve states, which declared that an armed attack on any one of them in Europe or North America would be regarded as an attack on all of them, and each of these states would be obliged to help the others. The eastern alliance, known as the Warsaw Pact, was led by the Soviet Union. It was created in 1955, and its principal function was to counter NATO’s forces in Europe.\n\nInternational alliances during the Cold War era were determined by the requirements of the superpowers and the calculations of the smaller states. As noted above, Europe became the main arena of conflict between the superpowers. In some cases, the superpowers used their military power to bring countries into their respective alliances. Soviet intervention in eastern Europe provides an example. The Soviet Union used its influence in eastern Europe, backed by the very large presence of its armies in the countries of the region, to ensure that the eastern half of Europe remained within its sphere of influence. In East and Southeast Asia and in West Asia (the Middle East), the United States built alliance systems: the Southeast Asian Treaty Organisation (SEATO) and the Central Treaty Organisation (CENTO). The Soviet Union and communist China responded by cultivating close relations with regional countries such as North Vietnam, North Korea and Iraq.\n\nThe Cold War threatened to divide the world into two alliances. Under these circumstances, many of the newly independent countries, after gaining their independence from colonial powers such as Britain and France, were worried that they would lose their freedom as soon as they gained formal independence. Cracks and splits within the alliances were quick to appear. Communist China quarrelled with the USSR towards the late 1950s, and in 1969 they fought a brief war over a territorial dispute. The other important development was the Non-Aligned Movement (NAM), which gave the newly independent countries a way of staying out of the alliances.\n\nYou may ask why the superpowers needed any allies at all. After all, with their nuclear weapons and regular armies, they were so powerful that the combined power of most of the smaller states in Asia and Africa, and even in Europe, was no match for that of the superpowers. Yet, the smaller states were helpful for the superpowers in gaining access to\n(i) vital resources, such as oil and minerals,\n(ii) territory, from where the superpowers could launch their weapons and troops,\n(iii) locations from where they could spy on each other, and\n(iv) economic support, in that many small allies together could help pay for military expenses.\n\nThey were also important for ideological reasons. The loyalty of allies suggested that the superpowers were winning the war of ideas as well: that liberal democracy and capitalism were better than socialism and communism, or vice versa.\n\nThe Cuban Missile Crisis that we began this chapter with was only one of several crises that occurred during the Cold War. The Cold War also led to several shooting wars, but it is important to note that these crises and wars did not lead to another world war. The two superpowers were poised for direct confrontation in Korea (1950-53), Berlin (1958-62), the Congo (the early 1960s), and several other places. Crises deepened, as neither of the parties involved was willing to back down.
When we talk about arenas of the Cold War, we refer, therefore, to areas where crisis and war occurred, or threatened to occur, between the alliance systems but did not cross certain limits. A great many lives were lost in some of these arenas, like Korea, Vietnam and Afghanistan, but the world was spared a nuclear war and global hostilities. In some cases, huge military build-ups were reported. In many cases, diplomatic communication between the superpowers could not be sustained, which contributed to misunderstandings.", "doc_id": "6216225a-4ba3-11ed-9ca2-0242ac110007"} {"source": "NCERT XII Political Science, India", "document": "Sometimes, countries outside the two blocs, for example the non-aligned countries, played a role in reducing Cold War conflicts and averting some grave crises. Jawaharlal Nehru, one of the key leaders of the NAM, played a crucial role in mediating between the two Koreas. In the Congo crisis, the UN Secretary-General played a key mediatory role. By and large, it was the realisation on the superpowers’ part that war by all means should be avoided that made them exercise restraint and behave more responsibly in international affairs. As the Cold War rolled from one arena to another, the logic of restraint was increasingly evident.\n\nHowever, since the Cold War did not eliminate rivalries between the two alliances, mutual suspicions led them to arm themselves to the teeth and to constantly prepare for war. Huge stocks of arms were considered necessary to prevent wars from taking place.\n\nThe two sides understood that war might occur in spite of restraint. Either side might miscalculate the number of weapons in the possession of the other side, or misunderstand its intentions. Besides, what if there was a nuclear accident? What would happen if someone fired off a nuclear weapon by mistake, or if a soldier mischievously shot off a weapon deliberately to start a war? What if an accident occurred with a nuclear weapon?
How would the leaders of that country know it was an accident and not an act of sabotage by the enemy, or that a missile had not landed from the other side?\n\nIn time, therefore, the US and the USSR decided to collaborate in limiting or eliminating certain kinds of nuclear and non-nuclear weapons. A stable balance of weapons, they decided, could be maintained through ‘arms control’. Starting in the 1960s, the two sides signed three significant agreements within a decade: the Limited Test Ban Treaty, the Nuclear Non-Proliferation Treaty and the Anti-Ballistic Missile Treaty. Thereafter, the superpowers held several rounds of arms limitation talks and signed several more treaties to limit their arms.", "doc_id": "1eb07884-4ba4-11ed-bb30-0242ac110007"}
{"source": "NCERT XII Political Science, India", "document": "Gradually, the nature of non-alignment changed to give greater importance to economic issues. In 1961, at the first summit in Belgrade, economic issues had not been very important; by the mid-1970s, they had become the most important issues. As a result, NAM became an economic pressure group. By the late 1980s, however, the NIEO initiative had faded, mainly because of stiff opposition from the developed countries, who acted as a united group while the non-aligned countries struggled to maintain their unity in the face of this opposition.\n\nAs a leader of NAM, India’s response to the ongoing Cold War was two-fold: at one level, it took particular care to stay away from the two alliances; at another, it raised its voice against the newly decolonised countries becoming part of these alliances.\n\nIndia’s policy was neither negative nor passive. As Nehru reminded the world, non-alignment was not a policy of ‘fleeing away’. On the contrary, India was in favour of actively intervening in world affairs to soften Cold War rivalries. India tried to reduce the differences between the alliances and thereby prevent differences from escalating into a full-scale war. Indian diplomats and leaders were often used to communicate and mediate between Cold War rivals, as in the Korean War in the early 1950s.\n\nIt is important to remember that India chose to involve other members of the non-aligned group in this mission. During the Cold War, India repeatedly tried to activate those regional and international organisations which were not part of the alliances led by the US and the USSR. Nehru reposed great faith in ‘a genuine commonwealth of free and cooperating nations’ that would play a positive role in softening, if not ending, the Cold War.\n\nNon-alignment was not, as some suggest, a noble international cause which had little to do with India’s real interests. A non-aligned posture also served India’s interests very directly, in at least two ways.\nFirst, non-alignment allowed India to take international decisions and stances that served its interests rather than the interests of the superpowers and their allies.\nSecond, India was often able to balance one superpower against the other. If India felt ignored or unduly pressurised by one superpower, it could tilt towards the other. Neither alliance system could take India for granted or bully it.\n\nIndia’s policy of non-alignment was criticised on a number of counts. Here we may refer to only two criticisms.\nFirst, India’s non-alignment was said to be ‘unprincipled’.
In the name of pursuing its national interest, India, it was said, often refused to take a firm stand on crucial international issues.\nSecond, it is suggested that India was inconsistent and took contradictory postures. Having criticised others for joining alliances, India signed the Treaty of Friendship with the USSR in August 1971, for 20 years. This was regarded, particularly by outside observers, as India virtually joining the Soviet alliance system. The Indian government’s view was that India needed diplomatic, and possibly military, support during the Bangladesh crisis, and that in any case the treaty did not stop India from having good relations with other countries, including the US.\n\nNon-alignment as a strategy evolved in the Cold War context. As we will see in Chapter 2, with the disintegration of the USSR and the end of the Cold War in 1991, non-alignment, both as an international movement and as the core of India’s foreign policy, lost some of its earlier relevance and effectiveness. However, non-alignment contained some core values and enduring ideas. It was based on a recognition that decolonised states share a historical affiliation and can become a powerful force if they come together. It meant that the poor and often very small countries of the world need not become followers of any of the big powers; they could pursue an independent foreign policy. It was also based on a resolve to democratise the international system by thinking about an alternative world order to redress existing inequities. These core ideas remain relevant even after the Cold War has ended.", "doc_id": "189a782a-4ba7-11ed-82a7-0242ac110007"} {"source": "NCERT IX Science, India", "document": "We all know from our observation that water can exist in three states of matter:\nsolid, as ice,\nliquid, as the familiar water, and\ngas, as water vapour.\n\nWhat happens inside matter during this change of state? What happens to the particles of matter during the change of state? How does this change of state take place? We need answers to these questions, don’t we?\n\nOn increasing the temperature of solids, the kinetic energy of the particles increases. Due to the increase in kinetic energy, the particles start vibrating with greater speed. The energy supplied by heat overcomes the forces of attraction between the particles. The particles leave their fixed positions and start moving more freely. A stage is reached when the solid melts and is converted to a liquid. The minimum temperature at which a solid melts to become a liquid at atmospheric pressure is called its melting point. The melting point of a solid is an indication of the strength of the forces of attraction between its particles.\n\nThe melting point of ice is 273.15 K. The process of melting, that is, the change of the solid state into the liquid state, is also known as fusion. When a solid melts, its temperature remains the same, so where does the heat energy go?\n\nYou must have observed, during the experiment on melting, that the temperature of the system does not change after the melting point is reached, till all the ice melts, even though we continue to supply heat. This heat gets used up in changing the state by overcoming the forces of attraction between the particles. As this heat energy is absorbed by the ice without any rise in temperature, it is considered to be hidden in the contents of the beaker and is known as the latent heat.
The word latent means hidden. The amount of heat energy required to change 1 kg of a solid into a liquid at atmospheric pressure, at its melting point, is known as the latent heat of fusion. So, particles in water at 0 °C (273 K) have more energy than particles in ice at the same temperature.\n\nWhen we supply heat energy to water, the particles start moving even faster. At a certain temperature, a point is reached when the particles have enough energy to break free from the forces of attraction of each other. At this temperature the liquid starts changing into gas. The temperature at which a liquid starts boiling at atmospheric pressure is known as its boiling point. Boiling is a bulk phenomenon: particles from the bulk of the liquid gain enough energy to change into the vapour state.\n\nCan you define the latent heat of vaporisation? Do it in the same way as we have defined the latent heat of fusion. Particles in steam, that is, water vapour at 373 K (100 °C), have more energy than water at the same temperature. This is because particles in steam have absorbed extra energy in the form of the latent heat of vaporisation. So, we infer that matter can be changed from one state into another by changing the temperature.\n\nWe have learnt that substances around us change state from solid to liquid and from liquid to gas on application of heat. But there are some that change directly from the solid state to the gaseous state, and vice versa, without passing through the liquid state.", "doc_id": "d771f27a-4baf-11ed-aa6e-0242ac110007"}
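The definition of latent heat of fusion above turns directly into the relation Q = mL. A short worked example in Python, using the commonly quoted approximate value for ice (about 3.34 × 10^5 J/kg); the mass is an arbitrary illustrative choice:

```python
L_FUSION_ICE = 3.34e5   # J/kg, approximate latent heat of fusion of ice
mass = 0.5              # kg of ice, already at its melting point of 273.15 K

# Q = m * L: heat absorbed while melting, with no rise in temperature
heat_absorbed = mass * L_FUSION_ICE
print(f"{heat_absorbed:.3g} J")   # ~1.67e5 J, 'hidden' in the melted water
```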
{"source": "NCERT IX Science, India", "document": "We have learnt Dalton’s atomic theory in Chapter 3, which suggested that the atom was indivisible and indestructible. But the discovery of two fundamental particles (electrons and protons) inside the atom led to the failure of this aspect of Dalton’s atomic theory. It was then considered necessary to know how electrons and protons are arranged within an atom. To explain this, many scientists proposed various atomic models. J.J. Thomson was the first to propose a model for the structure of an atom.\n\nThomson proposed that the atom was similar to a Christmas pudding: the electrons, in a sphere of positive charge, were like currants (dry fruits) in a spherical Christmas pudding. We can also think of a watermelon, where the positive charge in the atom is spread all over like the red edible part, while the electrons are studded in the positively charged sphere like the seeds (Fig. 4.1).\n\nThomson proposed that:\n(i) An atom consists of a positively charged sphere with the electrons embedded in it.\n(ii) The negative and positive charges are equal in magnitude, so the atom as a whole is electrically neutral.\nAlthough Thomson’s model explained why atoms are electrically neutral, the results of experiments carried out by other scientists could not be explained by this model, as we will see below.\n\nErnest Rutherford was interested in knowing how the electrons are arranged within an atom, and designed an experiment for this. In this experiment, fast-moving alpha (α)-particles were made to fall on a thin gold foil. He selected gold foil because he wanted as thin a layer as possible; this foil was about 1000 atoms thick. α-particles are doubly charged helium ions. Since they have a mass of 4 u, the fast-moving α-particles carry a considerable amount of energy.\nIt was expected that the α-particles would be deflected by the sub-atomic particles in the gold atoms. Since the α-particles were much heavier than protons, he did not expect to see large deflections.\n\nBut the α-particle scattering experiment gave totally unexpected results (Fig. 4.2). The following observations were made:\n(i) Most of the fast-moving α-particles passed straight through the gold foil.\n(ii) Some of the α-particles were deflected by the foil through small angles.\n(iii) Surprisingly, one out of every 12000 particles appeared to rebound.\n\nIn the words of Rutherford, “This result was almost as incredible as if you fire a 15-inch shell at a piece of tissue paper and it comes back and hits you”. Consider a child throwing stones at a solid wall with eyes closed: the child will hear a sound when each stone strikes the wall, and if this is repeated ten times, the sound will be heard ten times. But if a blind-folded child were to throw stones at a barbed-wire fence, most of the stones would not hit the fencing and no sound would be heard, because there are lots of gaps in the fence which allow the stones to pass through. Following a similar reasoning, Rutherford concluded from the α-particle scattering experiment that:\n(i) Most of the space inside the atom is empty, because most of the α-particles passed through the gold foil without getting deflected.\n(ii) Very few particles were deflected from their path, indicating that the positive charge of the atom occupies very little space.\n(iii) A very small fraction of the α-particles were deflected by 180°, indicating that all the positive charge and mass of the gold atom were concentrated in a very small volume within the atom.\n\nFrom the data he also calculated that the radius of the nucleus is about 10^5 times smaller than the radius of the atom.\nOn the basis of his experiment, Rutherford put forward the nuclear model of the atom, which had the following features:\n(i) There is a positively charged centre in the atom called the nucleus. Nearly all the mass of the atom resides in the nucleus.\n(ii) The electrons revolve around the nucleus in circular paths.\n(iii) The size of the nucleus is very small compared to the size of the atom.", "doc_id": "c875214e-4bb3-11ed-9ab5-0242ac110007"}
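Rutherford’s estimate that the nuclear radius is roughly 10^5 times smaller than the atomic radius already accounts for observation (i). A quick back-of-the-envelope check in Python; the atomic radius used is a round order-of-magnitude figure, not a value from the text:

```python
r_atom = 1e-10             # m, typical atomic radius (order of magnitude)
r_nucleus = r_atom / 1e5   # nucleus about 10^5 times smaller, per Rutherford

ratio = r_nucleus / r_atom
print(f"area fraction:   {ratio**2:.0e}")   # 1e-10: chance a straight line through
                                            # one atom passes over the nucleus
print(f"volume fraction: {ratio**3:.0e}")   # 1e-15: the atom is almost entirely empty
```

With so tiny a target, it is no surprise that most alpha-particles sailed straight through, and only a rare one met the concentrated positive charge nearly head-on.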
{"source": "NCERT IX Science, India", "document": "Unicellular freshwater organisms and most plant cells tend to gain water through osmosis. Absorption of water by plant roots is also an example of osmosis. Thus, diffusion is important in the exchange of gases and water in the life of a cell. In addition to this, the cell also obtains nutrition from its environment. Different molecules move in and out of the cell through a type of transport requiring the use of energy. The plasma membrane is flexible and is made up of organic molecules called lipids and proteins. However, we can observe the structure of the plasma membrane only through an electron microscope. The flexibility of the cell membrane also enables the cell to engulf food and other material from its external environment. Such processes are known as endocytosis. Amoeba acquires its food through such processes.\n\nPlant cells, in addition to the plasma membrane, have another rigid outer covering called the cell wall. The cell wall lies outside the plasma membrane. The plant cell wall is mainly composed of cellulose. Cellulose is a complex substance and provides structural strength to plants.\n\nWhen a living plant cell loses water through osmosis, there is shrinkage or contraction of the contents of the cell away from the cell wall. This phenomenon is known as plasmolysis. We can observe this phenomenon by performing the following activity. What do we infer from this activity? It appears that only living cells, and not dead cells, are able to absorb water by osmosis. Cell walls permit the cells of plants, fungi and bacteria to withstand very dilute (hypotonic) external media without bursting. In such media the cells tend to take up water by osmosis. The cell swells, building up pressure against the cell wall, and the wall exerts an equal pressure against the swollen cell. Because of their walls, such cells can withstand much greater changes in the surrounding medium than animal cells.\n\nRemember the temporary mount of onion peel we prepared? We had put iodine solution on the peel. Why? What would we see if we tried observing the peel without putting the iodine solution on it? Try it and see what the difference is. Further, when we put iodine solution on the peel, did each cell get evenly coloured? Different regions of cells get coloured differentially according to their chemical composition: some regions appear darker than others. Apart from iodine solution, we could also use safranin solution or methylene blue solution to stain the cells. We have observed cells from an onion; let us now observe cells from our own body.\n\nThe nucleus has a double-layered covering called the nuclear membrane. The nuclear membrane has pores which allow the transfer of material from inside the nucleus to its outside, that is, to the cytoplasm.
The nucleus contains chromosomes, which are visible as rod-shaped structures only when the cell is about to divide.\n\nChromosomes contain information for the inheritance of characters from parents to the next generation in the form of DNA (deoxyribonucleic acid) molecules. Chromosomes are composed of DNA and protein. DNA molecules contain the information necessary for constructing and organising cells. Functional segments of DNA are called genes. In a cell which is not dividing, this DNA is present as part of the chromatin material, visible as an entangled mass of thread-like structures. Whenever the cell is about to divide, the chromatin material gets organised into chromosomes.\n\nThe nucleus plays a central role in cellular reproduction, the process by which a single cell divides and forms two new cells. It also plays a crucial part, along with the environment, in determining the way the cell will develop and what form it will exhibit at maturity, by directing the chemical activities of the cell.\n\nIn some organisms, like bacteria, the nuclear region of the cell may be poorly defined due to the absence of a nuclear membrane. Such an undefined nuclear region containing only nucleic acids is called a nucleoid. Organisms whose cells lack a nuclear membrane are called prokaryotes (pro = primitive or primary; karyote ≈ karyon = nucleus). Organisms with cells having a nuclear membrane are called eukaryotes. Prokaryotic cells (see Fig. 5.4) also lack most of the other cytoplasmic organelles present in eukaryotic cells; many of the functions of such organelles are performed by poorly organised parts of the cytoplasm. The chlorophyll in photosynthetic prokaryotic bacteria is associated with membranous vesicles (bag-like structures) and not with plastids, as in eukaryotic cells.\n\nWhen we look at temporary mounts of onion peel as well as human cheek cells, we can see a large region of each cell enclosed by the cell membrane. This region takes up very little stain. It is called the cytoplasm. The cytoplasm is the fluid content inside the plasma membrane. It also contains many specialised cell organelles. Each of these organelles performs a specific function for the cell.\n\nCell organelles are enclosed by membranes. In prokaryotes, besides the absence of a defined nuclear region, membrane-bound cell organelles are also absent. The eukaryotic cells, on the other hand, have a nuclear membrane as well as membrane-enclosed organelles.\n\nThe significance of membranes can be illustrated with the example of viruses. Viruses lack any membranes and hence do not show the characteristics of life until they enter a living body and use its cell machinery to multiply.", "doc_id": "0c051ace-4bbc-11ed-ae6e-0242ac110007"} {"source": "NCERT IX Science, India", "document": "Newton further studied Galileo’s ideas on force and motion and presented three fundamental laws that govern the motion of objects. These three laws are known as Newton’s laws of motion. The first law of motion is stated as: an object remains in a state of rest or of uniform motion in a straight line unless compelled to change that state by an applied force.\n\nIn other words, all objects resist a change in their state of motion. In a qualitative way, the tendency of undisturbed objects to stay at rest or to keep moving with the same velocity is called inertia.
This is why the first law of motion is also known as the law of inertia.\n\nCertain experiences that we come across while travelling in a motorcar can be explained on the basis of the law of inertia. We tend to remain at rest with respect to the seat until the driver applies a braking force to stop the motorcar. With the application of brakes, the car slows down, but our body tends to continue in the same state of motion because of its inertia. A sudden application of brakes may thus cause injury to us by impact or collision with the panels in front. Safety belts are worn to prevent such accidents; they exert a force on our body to make the forward motion slower. An opposite experience is encountered when we are standing in a bus and the bus begins to move suddenly. Now we tend to fall backwards. This is because the sudden start of the bus brings motion to the bus, as well as to our feet in contact with the floor of the bus, but the rest of our body opposes this motion because of its inertia.\n\nWhen a motorcar makes a sharp turn at high speed, we tend to get thrown to one side. This can again be explained on the basis of the law of inertia. We tend to continue in our straight-line motion. When an unbalanced force is applied by the engine to change the direction of motion of the motorcar, we slip to one side of the seat due to the inertia of our body.\n\nAll the examples and activities given so far illustrate that there is a resistance offered by an object to a change in its state of motion: if it is at rest it tends to remain at rest, and if it is moving it tends to keep moving. This property of an object is called its inertia. Do all bodies have the same inertia? We know that it is easier to push an empty box than a box full of books. Similarly, if we kick a football it flies away, but if we kick a stone of the same size with equal force, it hardly moves; we may, in fact, injure our foot while doing so! Similarly, in Activity 9.2, if we use a one-rupee coin instead of a five-rupee coin, we find that a smaller force is required to perform the activity. A force that is just enough to cause a small cart to pick up a large velocity will produce a negligible change in the motion of a train, because, in comparison to the cart, the train has a much smaller tendency to change its state of motion. Accordingly, we say that the train has more inertia than the cart. Clearly, heavier or more massive objects have greater inertia. Quantitatively, the inertia of an object is measured by its mass. We may thus relate inertia and mass as follows: inertia is the natural tendency of an object to resist a change in its state of motion or of rest, and the mass of an object is a measure of its inertia.
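The cart-versus-train comparison is the second law, a = F/m, at work: the same force produces accelerations in inverse proportion to mass. A sketch with invented masses and force:

```python
force = 500.0                             # N, same push applied to both (illustrative)
masses = {"cart": 50.0, "train": 5.0e5}   # kg, invented for contrast

for name, m in masses.items():
    # Newton's second law: F = m * a, so a = F / m
    print(f"{name}: a = {force / m} m/s^2")
# cart:  a = 10.0 m/s^2
# train: a = 0.001 m/s^2
```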
The first two laws of motion tell us how an applied force changes the motion of an object and provide us with a method of determining the force. The third law of motion states that when one object exerts a force on another object, the second object instantaneously exerts a force back on the first. These two forces are always equal in magnitude but opposite in direction, and they act on different objects, never on the same object. In the game of football, sometimes we, while looking at the football and trying to kick it with greater force, collide with a player of the opposite team. Both feel hurt because each applies a force to the other. In other words, there is a pair of forces and not just one force. The two opposing forces are also known as action and reaction forces.\n\nLet us consider two spring balances connected together as shown in Fig. 9.10. The fixed end of balance B is attached to a rigid support, like a wall. When a force is applied through the free end of spring balance A, it is observed that both spring balances show the same readings on their scales. It means that the force exerted by spring balance A on balance B is equal but opposite in direction to the force exerted by balance B on balance A. Either of these two forces can be called the action and the other the reaction. This gives us an alternative statement of the third law of motion, i.e., to every action there is an equal and opposite reaction. However, it must be remembered that the action and reaction always act on two different objects, simultaneously.\n\nSuppose you are standing at rest and intend to start walking on a road. You must accelerate, and this requires a force in accordance with the second law of motion. Which force is this? Is it the muscular effort you exert on the road? Is it in the direction in which you intend to move? No: you push the road backwards, and the road exerts an equal and opposite force on your feet to make you move forward. It is important to note that even though the action and reaction forces are always equal in magnitude, they may not produce accelerations of equal magnitude, because each force acts on a different object, which may have a different mass.\n\nWhen a gun is fired, it exerts a forward force on the bullet, and the bullet exerts an equal and opposite force on the gun. This results in the recoil of the gun (Fig. 9.11). Since the gun has a much greater mass than the bullet, the acceleration of the gun is much less than the acceleration of the bullet. The third law of motion can also be illustrated when a sailor jumps out of a rowing boat: as the sailor jumps forward, the force on the boat moves it backwards (Fig. 9.12).", "doc_id": "75586008-4bc1-11ed-9557-0242ac110007"}
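The recoil of the gun follows from the action-reaction pair acting over the same time, which makes the momenta of gun and bullet equal and opposite. A sketch with invented but plausible masses and muzzle speed (none of these numbers are from the text):

```python
m_bullet = 0.01     # kg (10 g), illustrative
m_gun = 4.0         # kg, illustrative
v_bullet = 400.0    # m/s, forward muzzle speed, illustrative

# Total momentum is zero before firing, so it stays zero after:
#   m_gun * v_gun + m_bullet * v_bullet = 0
v_gun = -(m_bullet * v_bullet) / m_gun
print(f"recoil speed: {v_gun} m/s")   # -1.0 m/s: backwards, tiny compared to 400 m/s
```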
{"source": "NCERT IX Science, India", "document": "Our planet, Earth, is the only one on which life, as we know it, exists. Life on Earth is dependent on many factors. Most life-forms we know need an ambient temperature, water and food. The resources available on the Earth, and the energy from the Sun, are necessary to meet the basic requirements of all life-forms on the Earth.\n\nThese resources are the land, the water and the air. The outer crust of the Earth is called the lithosphere. Water covers 75% of the Earth’s surface; it is also found underground. These comprise the hydrosphere. The air that covers the whole of the Earth like a blanket is called the atmosphere.\n\nLiving things are found where these three exist. This life-supporting zone of the Earth, where the atmosphere, the hydrosphere and the lithosphere interact and make life possible, is known as the biosphere. Living things constitute the biotic component of the biosphere. The air, the water and the soil form the non-living, or abiotic, component of the biosphere. Let us study these abiotic components in detail in order to understand their role in sustaining life on Earth.\n\nWe have already talked about the composition of air in the first chapter. It is a mixture of many gases, like nitrogen, oxygen, carbon dioxide and water vapour. It is interesting to note that even the composition of air is the result of life on Earth. On planets such as Venus and Mars, where no life is known to exist, the major component of the atmosphere is carbon dioxide; in fact, carbon dioxide constitutes 95-97% of the atmosphere on Venus and Mars.\n\nEukaryotic cells and many prokaryotic cells, discussed in Chapter 5, need oxygen to break down glucose molecules and get energy for their activities. This results in the production of carbon dioxide. Another process which results in the consumption of oxygen and the concomitant production of carbon dioxide is combustion. This includes not just human activities, which burn fuels to get energy, but also forest fires.\n\nDespite this, the percentage of carbon dioxide in our atmosphere is a mere fraction of a per cent, because carbon dioxide is ‘fixed’ in two ways: (i) green plants convert carbon dioxide into glucose in the presence of sunlight, and (ii) many marine animals use carbonates dissolved in sea-water to make their shells.\n\nWe have talked of the atmosphere covering the Earth like a blanket. We know that air is a bad conductor of heat. The atmosphere keeps the average temperature of the Earth fairly steady during the day, and even during the course of the whole year: it prevents a sudden increase in temperature during the daylight hours, and during the night it slows down the escape of heat into outer space. Think of the Moon, which is about the same distance from the Sun as the Earth is. Despite that, on the surface of the Moon, with no atmosphere, the temperature ranges from -190 °C to 110 °C.\n\nWe have all felt the relief brought by cool evening breezes after a hot day, and sometimes we are lucky enough to get rains after some days of really hot weather. What causes the movement of air, and what decides whether this movement will be in the form of a gentle breeze, a strong wind or a terrible storm? What brings us the welcome rains? All these phenomena are the result of changes that take place in our atmosphere due to the heating of air and the formation of water vapour. Water vapour is formed due to the heating of water bodies and the activities of living organisms. The atmosphere can be heated from below by the radiation that is reflected back, or re-radiated, by the land or water bodies. On being heated, convection currents are set up in the air.\n\nThe patterns revealed by smoke show us the directions in which hot and cold air move. In a similar manner, when air is heated by radiation from the heated land or water, it rises. But since land gets heated faster than water, the air over land is also heated faster than the air over water bodies.\n\nSo, if we look at the situation in coastal regions during the day, the air above the land gets heated faster and starts rising. As this air rises, a region of low pressure is created, and air over the sea moves into this area of low pressure. The movement of air from one region to another creates winds. During the day, therefore, the direction of the wind is from the sea to the land. At night, both land and sea start to cool. Since water cools more slowly than land, the air above water is warmer than the air above land.\n\nSimilarly, all the movements of air resulting in diverse atmospheric phenomena are caused by the uneven heating of the atmosphere in different regions of the Earth. But various other factors also influence these winds: the rotation of the Earth and the presence of mountain ranges in the paths of the wind are a couple of these factors.
We will not go into these factors in detail in this chapter, but think about this: how does the presence of the Himalayas change the flow of a wind blowing from Allahabad towards the north?", "doc_id": "178ae412-4ddf-11ed-8037-0242ac110007"}
Let us go back now to the question of how clouds are formed and bring us rain. We could start by doing a simple experiment which demonstrates some of the factors influencing these climatic changes.

The above experiment replicates, on a very small scale, what happens when air with a very high content of water vapour goes from a region of high pressure to a region of low pressure, or vice versa.

When water bodies are heated during the day, a large amount of water evaporates and goes into the air. Some water vapour also gets into the atmosphere because of various biological activities. This air also gets heated. The hot air rises, carrying the water vapour with it. As the air rises, it expands and cools. This cooling causes the water vapour in the air to condense in the form of tiny droplets. Condensation is facilitated if particles are present to act as a 'nucleus' around which the drops can form. Normally, dust and other suspended particles in the air perform this function.

Once the water droplets are formed, they grow bigger as more water condenses on them. When the drops have grown big and heavy, they fall down in the form of rain. Sometimes, when the temperature of the air is low enough, precipitation may occur in the form of snow, sleet or hail.
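The chapter explains that rising moist air expands, cools and condenses. A common rule of thumb from meteorology, not given in the text, is that the cloud base (the lifting condensation level) sits roughly 125 m higher for every degree Celsius by which the surface temperature exceeds the dew point. A hedged sketch with hypothetical values:

```python
# Rough estimate of the height at which rising moist air starts to condense
# into cloud droplets. Uses the standard ~125 m per degree C rule of thumb
# for the lifting condensation level; this approximation is external to the
# chapter, and the input values below are made up.

def cloud_base_m(surface_temp_c: float, dew_point_c: float) -> float:
    return 125.0 * (surface_temp_c - dew_point_c)

# Hypothetical coastal afternoon: air at 32 C with a dew point of 24 C.
print(cloud_base_m(32.0, 24.0))  # -> 1000.0 m: cloud base near 1 km up
```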
Rainfall patterns are decided by the prevailing wind patterns. In large parts of India, rains are mostly brought by the south-west or north-east monsoons. We have also heard weather reports that say 'depressions' in the Bay of Bengal have caused rains in some areas.

We keep hearing of the increasing levels of oxides of nitrogen and sulphur in the news. People often bemoan the fact that the quality of air has gone down since their childhood. How is the quality of air affected, and how does this change in quality affect us and other life forms?

Fossil fuels like coal and petroleum contain small amounts of nitrogen and sulphur. When these fuels are burnt, the nitrogen and sulphur in them are burnt too, and this produces various oxides of nitrogen and sulphur. Not only is the inhalation of these gases dangerous, they also dissolve in rain to give rise to acid rain. The combustion of fossil fuels also increases the amount of suspended particles in the air.
These suspended particles could be unburnt carbon particles or substances called hydrocarbons. The presence of high levels of all these pollutants lowers visibility, especially in cold weather when water also condenses out of the air. This is known as smog, and it is a visible indication of air pollution. Studies have shown that regularly breathing air that contains any of these substances increases the incidence of allergies, cancer and heart diseases. An increase in the content of these harmful substances in the air is called air pollution.
Water occupies a very large area of the Earth's surface and is also found underground. Some water exists as water vapour in the atmosphere. Most of the water on the Earth's surface is found in seas and oceans and is saline. Fresh water is found frozen in the ice-caps at the two poles and on snow-covered mountains. The underground water and the water in rivers, lakes and ponds is also fresh. However, the availability of fresh water varies from place to place. Practically every summer, most places have to face a shortage of water, and in rural areas, where water supply systems have not been installed, people are forced to spend considerable amounts of time fetching water from far-away sources.

Why do organisms require water? All cellular processes take place in a water medium. All the reactions that take place within our body, and within its cells, occur between substances that are dissolved in water. Substances are also transported from one part of the body to the other in dissolved form. Hence, organisms need to maintain the level of water within their bodies in order to stay alive. Terrestrial life-forms require fresh water for this because their bodies cannot tolerate or get rid of the high amounts of dissolved salts in saline water. Thus, water sources need to be easily accessible for animals and plants to survive on land.

After compiling the results of the above two activities, think about whether there is any relationship between the amount of available water and the number and variety of plants and animals that can live in a given area. If there is a relationship, where do you think you would find a greater variety and abundance of life: in a region that receives 5 cm of rainfall in a year, or in a region that receives 200 cm of rainfall in a year? Find the map showing rainfall patterns in the atlas and predict which States in India would have the maximum biodiversity and which would have the least. Can we think of any way of checking whether the prediction is correct?

The availability of water decides not only the number of individuals of each species that are able to survive in a particular area, but also the diversity of life there. Of course, the availability of water is not the only factor that decides the sustainability of life in a region; other factors, like the temperature and the nature of the soil, also matter. But water is one of the major resources which determine life on land.

Water dissolves the fertilisers and pesticides that we use on our farms, so some percentage of these substances is washed into the water bodies.
Sewage from our towns and cities, and waste from factories, are also dumped into rivers or lakes. Some industries use water for cooling in various operations and later return this hot water to water bodies. Another way in which the temperature of the water in rivers can be affected is when water is released from dams: the water deep inside a reservoir is colder than the water at the surface, which is heated by the Sun.

All this can affect the life-forms found in these water bodies in various ways. It can encourage the growth of some life-forms and harm others, upsetting the balance between the various organisms that had been established in that system. So we use the term water pollution to cover the following effects:
1. The addition of undesirable substances to water bodies. These substances could be the fertilisers and pesticides used in farming, or poisonous substances, like the mercury salts used by paper industries. They could also be disease-causing organisms, like the bacteria which cause cholera.
2. The removal of desirable substances from water bodies. Dissolved oxygen is used by the animals and plants that live in water, so any change that reduces the amount of this dissolved oxygen would adversely affect these aquatic organisms. Other nutrients can also be depleted from water bodies.
3. A change in temperature. Aquatic organisms are used to a certain range of temperature in the water body where they live, and a sudden marked change in this temperature would be dangerous for them or affect their breeding. The eggs and larvae of various animals are particularly susceptible to temperature changes.

Soil is an important resource that decides the diversity of life in an area. But what is soil, and how is it formed? The outermost layer of our Earth is called the crust, and the minerals found in this layer supply a variety of nutrients to life-forms. But these minerals will not be available to organisms as long as they remain bound up in huge rocks. Over long periods of time, thousands and millions of years, the rocks at or near the surface of the Earth are broken down by various physical, chemical and some biological processes. The end product of this breaking down is the fine particles of soil. But what are the factors or processes that make soil?
The Sun: The Sun heats up rocks during the day so that they expand. At night, these rocks cool down and contract. Since all parts of the rock do not expand and contract at the same rate, this results in the formation of cracks, and ultimately the huge rocks break up into smaller pieces.
Water: Water helps in the formation of soil in two ways. One, water could get into the cracks in the rocks formed due to uneven heating by the Sun. If this water later freezes, it would cause the cracks to widen. Can you think why this should be so? Two, flowing water wears away even hard rock over long periods of time. Fast-flowing water often carries big and small particles of rock downstream. These rocks rub against other rocks, and the resultant abrasion causes them to wear down into smaller and smaller particles. The water then takes these particles along with it and deposits them further down its path.
Soil is thus found in places far away from its parent rock.
Wind: In a process similar to the way in which water rubs against rocks and wears them down, strong winds also erode rocks. The wind also carries sand from one place to another, as water does.
Living organisms also influence the formation of soil. The lichen that we read about earlier grows on the surface of rocks. While growing, lichens release certain substances that cause the rock surface to powder down and form a thin layer of soil. Other small plants, like moss, are then able to grow on this surface, and they cause the rock to break up further. The roots of big trees sometimes go into cracks in the rocks, and as the roots grow bigger, the crack is forced wider.

As you have seen, soil is a mixture. It contains small particles of rock (of different sizes). It also contains bits of decayed living organisms, which is called humus. In addition, soil contains various forms of microscopic life. The type of soil is decided by the average size of the particles found in it, and the quality of the soil is decided by the amount of humus and the microscopic organisms found in it. Humus is a major factor in deciding the soil structure, because it makes the soil more porous and allows water and air to penetrate deep underground. The mineral nutrients found in a particular soil depend on the rocks it was formed from. The nutrient content of a soil, the amount of humus present in it and its depth are some of the factors that decide which plants will thrive in that soil. The topmost layer of the soil, which contains humus and living organisms in addition to soil particles, is called the topsoil. The quality of the topsoil is an important factor that decides the biodiversity of an area.

Modern farming practices involve the use of large amounts of fertilisers and pesticides. Use of these substances over long periods of time can destroy the soil structure by killing the soil micro-organisms that recycle nutrients in the soil. It also kills the earthworms, which are instrumental in making the rich humus. Fertile soil can quickly be turned barren if sustainable practices are not followed. The removal of useful components from the soil, and the addition of other substances which adversely affect its fertility and kill the diversity of organisms living in it, is called soil pollution.

The soil that we see today in one place has been created over a very long period of time. However, some of the factors that created the soil in the first place, and brought it to that place, may be responsible for its removal too. The fine particles of soil may be carried away by flowing water or wind. If all the soil gets washed away and the rocks underneath are exposed, we have lost a valuable resource, because very little will grow on bare rock.

The roots of plants have an important role in preventing soil erosion. The large-scale deforestation that is happening all over the world not only destroys biodiversity, it also leads to soil erosion. Topsoil that is bare of vegetation is likely to be removed very quickly, and this is accelerated in hilly or mountainous regions. This process of soil erosion is very difficult to reverse. Vegetative cover on the ground also plays a role in the percolation of water into the deeper layers.

A constant interaction between the biotic and abiotic components of the biosphere makes it a dynamic, but stable, system.
These interactions consist of a transfer of matter and energy between the different components of the biosphere. Let us look at some processes involved in the maintenance of this balance.

You have seen how water evaporates from water bodies, and how the subsequent condensation of this water vapour leads to rain. But we don't see the seas and oceans drying up. So, how is the water returning to these water bodies? The whole process in which water evaporates and falls on the land as rain, and later flows back into the sea via rivers, is known as the water cycle. This cycle is not as straightforward and simple as this statement seems to imply. Not all of the water that falls on the land immediately flows back into the sea. Some of it seeps into the soil and becomes part of the underground reservoir of fresh water. Some of this underground water finds its way to the surface through springs, or we bring it to the surface for our use through wells or tube-wells. Water is also used by terrestrial animals and plants for various life processes.
India is one of the ancient civilisations of the world. It has achieved multi-faceted socio-economic progress during the last five decades. It has moved forward, displaying remarkable progress in the fields of agriculture, industry, technology and overall economic development. India has also contributed significantly to the making of world history.

India is a vast country. Lying entirely in the Northern Hemisphere (Figure 1.1), the mainland extends between latitudes 8°4'N and 37°6'N and longitudes 68°7'E and 97°25'E. The Tropic of Cancer (23°30'N) divides the country into almost two equal parts. To the southeast and southwest of the mainland lie the Andaman and Nicobar Islands and the Lakshadweep Islands, in the Bay of Bengal and the Arabian Sea respectively. Find out the extent of these groups of islands from your atlas.

The landmass of India has an area of 3.28 million square km. India's total area accounts for about 2.4 per cent of the total geographical area of the world. From Figure 1.2 it is clear that India is the seventh largest country in the world. India has a land boundary of about 15,200 km, and the total length of the coastline of the mainland, including the Andaman and Nicobar Islands and Lakshadweep, is 7,516.6 km.

India is bounded by young fold mountains in the northwest, north and northeast. South of about 22° north latitude, it begins to taper and extends towards the Indian Ocean, dividing it into two seas: the Arabian Sea on the west and the Bay of Bengal on the east. Look at Figure 1.3 and note that the latitudinal and longitudinal extent of the mainland is about 30°. Despite this fact, the east-west extent appears to be smaller than the north-south extent.

From Gujarat to Arunachal Pradesh, there is a time lag of two hours. Hence, time along the Standard Meridian of India (82°30'E), passing through Mirzapur in Uttar Pradesh, is taken as the standard time for the whole country. The latitudinal extent influences the duration of day and night as one moves from south to north.
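The two-hour lag mentioned above follows from simple arithmetic: the Earth rotates 360° in 24 hours, i.e. 1° every 4 minutes. A short sketch using the longitudes given in the text (the helper function is ours, for illustration only):

```python
# Local solar time lag across India's east-west extent.
# Earth turns 360 degrees in 24 hours -> 4 minutes per degree of longitude.
# Longitudes are from the text; the function is illustrative.

def solar_time_lag_minutes(west_lon_deg: float, east_lon_deg: float) -> float:
    return (east_lon_deg - west_lon_deg) * 4.0  # 4 minutes per degree

west = 68 + 7 / 60    # 68 deg 7 min E (western extremity)
east = 97 + 25 / 60   # 97 deg 25 min E (eastern extremity)
print(round(solar_time_lag_minutes(west, east)))  # ~117 minutes, about 2 hours
```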
The Indian landmass has a central location between East and West Asia. India is a southward extension of the Asian continent. The trans-Indian Ocean routes, which connect the countries of Europe in the west with the countries of East Asia, give India a strategic central location. Note that the Deccan Peninsula protrudes into the Indian Ocean, helping India to establish close contact with West Asia, Africa and Europe from the western coast, and with Southeast and East Asia from the eastern coast. No other country has as long a coastline on the Indian Ocean as India has, and it is indeed India's eminent position in the Indian Ocean which justifies the naming of an ocean after it.

India's contacts with the world have continued through the ages, but her relationships through the land routes are much older than her maritime contacts. The various passes across the mountains in the north provided passages to ancient travellers, while the oceans restricted such interaction for a long time.
These routes have contributed to the exchange of ideas and commodities since ancient times. The ideas of the Upanishads and the Ramayana, the stories of the Panchtantra, and the Indian numerals and the decimal system could thus reach many parts of the world. Spices, muslin and other merchandise were taken from India to different countries. On the other hand, the influence of Greek sculpture, and of the architectural styles of the dome and minarets from West Asia, can be seen in different parts of our country.

The Brahmaputra rises in Tibet, east of Mansarowar lake, very close to the sources of the Indus and the Satluj. It is slightly longer than the Indus, and most of its course lies outside India. It flows eastwards, parallel to the Himalayas. On reaching Namcha Barwa (7,757 m), it takes a 'U' turn and enters India in Arunachal Pradesh through a gorge. Here, it is called the Dihang, and it is joined by the Dibang, the Lohit and many other tributaries to form the Brahmaputra in Assam.

In Tibet, the river carries a smaller volume of water and less silt, as it is a cold and dry area. In India, it passes through a region of high rainfall; here the river carries a large volume of water and a considerable amount of silt. The Brahmaputra has a braided channel along its entire length in Assam and forms many riverine islands. Do you remember the name of the world's largest riverine island, formed by the Brahmaputra?

Every year during the rainy season, the river overflows its banks, causing widespread devastation due to floods in Assam and Bangladesh. Unlike other north Indian rivers, the Brahmaputra is marked by huge deposits of silt on its bed, causing the riverbed to rise. The river also shifts its channel frequently.

The main water divide in Peninsular India is formed by the Western Ghats, which run from north to south close to the western coast. Most of the major rivers of the Peninsula, such as the Mahanadi, the Godavari, the Krishna and the Kaveri, flow eastwards and drain into the Bay of Bengal. These rivers make deltas at their mouths. There are numerous small streams flowing west of the Western Ghats. The Narmada and the Tapi are the only long rivers which flow west and make estuaries. The drainage basins of the peninsular rivers are comparatively small in size.

The Narmada rises in the Amarkantak hills in Madhya Pradesh. It flows towards the west in a rift valley formed due to faulting. On its way to the sea, the Narmada creates many picturesque locations. The 'Marble Rocks' near Jabalpur, where the Narmada flows through a deep gorge, and the 'Dhuadhar Falls', where the river plunges over steep rocks, are some of the notable ones. All the tributaries of the Narmada are very short, and most of them join the main stream at right angles. The Narmada basin covers parts of Madhya Pradesh and Gujarat.

The Tapi rises in the Satpura ranges, in the Betul district of Madhya Pradesh. It also flows in a rift valley, parallel to the Narmada, but it is much shorter in length. Its basin covers parts of Madhya Pradesh, Gujarat and Maharashtra. The coastal plains between the Western Ghats and the Arabian Sea are very narrow; hence, the coastal rivers are short. The main west-flowing rivers are the Sabarmati, Mahi, Bharathpuzha and Periyar. Find out the states which these rivers drain.

The Godavari is the largest Peninsular river.
It rises from the slopes of the Western Ghats in the Nasik district of Maharashtra. Its length is about 1,500 km, and it drains into the Bay of Bengal. Its drainage basin is also the largest among the peninsular rivers: it covers parts of Maharashtra (about 50 per cent of the basin area lies in Maharashtra), Madhya Pradesh, Odisha and Andhra Pradesh. The Godavari is joined by a number of tributaries, such as the Purna, the Wardha, the Pranhita, the Manjra, the Wainganga and the Penganga; the last three are very large. Because of its length and the area it covers, it is also known as the Dakshin Ganga.

The Mahanadi rises in the highlands of Chhattisgarh and flows through Odisha to reach the Bay of Bengal. The length of the river is about 860 km. Its drainage basin is shared by Maharashtra, Chhattisgarh, Jharkhand and Odisha.

Rising from a spring near Mahabaleshwar, the Krishna flows for about 1,400 km and reaches the Bay of Bengal. The Tungabhadra, the Koyana, the Ghatprabha, the Musi and the Bhima are some of its tributaries. Its drainage basin is shared by Maharashtra, Karnataka and Andhra Pradesh.

The Kaveri rises in the Brahmagiri range of the Western Ghats and reaches the Bay of Bengal south of Cuddalore, in Tamil Nadu. The total length of the river is about 760 km. Its main tributaries are the Amravati, Bhavani, Hemavati and Kabini. Its basin drains parts of Karnataka, Kerala and Tamil Nadu.

Besides these major rivers, there are some smaller rivers flowing towards the east. The Damodar, the Brahmani, the Baitarni and the Subarnarekha are some notable examples.
Rivers have been of fundamental importance throughout human history. Water from rivers is a basic natural resource, essential for various human activities; riverbanks have therefore attracted settlers from ancient times. Many of these settlements have now become big cities. Make a list of cities in your state which are located on the bank of a river.
Using rivers for irrigation, navigation and hydro-power generation is of special significance, particularly to a country like India, where agriculture is the major source of livelihood for the majority of the population.

Climate refers to the sum total of weather conditions and variations over a large area for a long period of time (more than thirty years). Weather refers to the state of the atmosphere over an area at any point of time. The elements of weather and climate are the same, i.e. temperature, atmospheric pressure, wind, humidity and precipitation. You may have observed that weather conditions fluctuate very often, even within a day; but there is some common pattern over a few weeks or months, i.e. days are cool or hot, windy or calm, cloudy or bright, and wet or dry. On the basis of the generalised monthly atmospheric conditions, the year is divided into seasons such as winter, summer and the rainy season.

The climate of India is described as the 'monsoon' type. In Asia, this type of climate is found mainly in the south and the southeast. Despite an overall unity in the general pattern, there are perceptible regional variations in climatic conditions within the country. Let us take two important elements, temperature and precipitation, and examine how they vary from place to place and season to season. In summer, the mercury occasionally touches 50°C in some parts of the Rajasthan desert, whereas it may be around 20°C in Pahalgam in Jammu and Kashmir. On a winter night, the temperature at Drass in Jammu and Kashmir may be as low as minus 45°C, while Thiruvananthapuram, on the other hand, may have a temperature of 22°C.

Let us now look at precipitation. There are variations not only in the form and types of precipitation but also in its amount and seasonal distribution. While precipitation is mostly in the form of snowfall in the upper parts of the Himalayas, it rains over the rest of the country. The annual precipitation varies from over 400 cm in Meghalaya to less than 10 cm in Ladakh and western Rajasthan. Most parts of the country receive rainfall from June to September, but some parts, like the Tamil Nadu coast, get a large portion of their rain during October and November.

In general, coastal areas experience smaller contrasts in temperature conditions; seasonal contrasts are greater in the interior of the country. Rainfall generally decreases from east to west in the Northern Plains. These variations have given rise to variety in the lives of people, in terms of the food they eat, the clothes they wear and the kind of houses they live in.

There are six major controls of the climate of any place: latitude, altitude, the pressure and wind system, distance from the sea (continentality), ocean currents and relief features.

Due to the curvature of the Earth, the amount of solar energy received varies with latitude. As a result, air temperature generally decreases from the equator towards the poles. As one goes from the surface of the Earth to higher altitudes, the atmosphere becomes less dense and the temperature decreases; the hills are therefore cooler during summers. The pressure and wind system of any area depends on the latitude and altitude of the place, and thus influences its temperature and rainfall pattern.
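To see roughly how the altitude control works, one can use the average environmental lapse rate of about 6.5°C per kilometre, a standard meteorological figure that this chapter does not itself quote. A toy calculation with made-up station values:

```python
# Illustration of the altitude control on temperature. The 6.5 C/km figure
# is the standard average environmental lapse rate (an external assumption,
# not from this chapter); the plains temperature and station height below
# are hypothetical.

AVG_LAPSE_RATE_C_PER_KM = 6.5

def temp_at_altitude(plains_temp_c: float, altitude_km: float) -> float:
    return plains_temp_c - AVG_LAPSE_RATE_C_PER_KM * altitude_km

# Hypothetical summer day: 38 C on the plains; a hill station at ~2 km.
print(temp_at_altitude(38.0, 2.0))  # -> 25.0 C: why hills are cooler in summer
```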
The sea exerts a moderating influence on climate: as the distance from the sea increases, its moderating influence decreases, and people experience extreme weather conditions. This is known as continentality (i.e. very hot during summers and very cold during winters). Ocean currents, along with onshore winds, affect the climate of coastal areas. For example, any coastal area with warm or cold currents flowing past it will be warmed or cooled if the winds are onshore.

Finally, relief too plays a major role in determining the climate of a place. High mountains act as barriers for cold or hot winds; they may also cause precipitation if they are high enough and lie in the path of rain-bearing winds. The leeward side of a mountain remains relatively dry.

The Tropic of Cancer passes through the middle of the country, from the Rann of Kuchchh in the west to Mizoram in the east. Almost half of the country, lying south of the Tropic of Cancer, belongs to the tropical area. All the remaining area, north of the Tropic, lies in the sub-tropics. Therefore, India's climate has characteristics of tropical as well as subtropical climates.

India has mountains to the north, which have an average height of about 6,000 metres. India also has a vast coastal area where the maximum elevation is about 30 metres. The Himalayas prevent the cold winds from Central Asia from entering the subcontinent. It is because of these mountains that the subcontinent experiences comparatively milder winters than Central Asia.

The climate and associated weather conditions in India are governed by the following atmospheric conditions:
pressure and surface winds;
upper air circulation;
western cyclonic disturbances and tropical cyclones.
India lies in the region of north-easterly winds. These winds originate from the subtropical high-pressure belt of the northern hemisphere. They blow southwards, get deflected to the right due to the Coriolis force, and move towards the equatorial low-pressure area. Generally, these winds carry little moisture, as they originate and blow over land; therefore, they bring little or no rain. Hence, India should have been an arid land, but it is not so. Let us see why.
The pressure and wind conditions over India are unique. During winter, there is a high-pressure area north of the Himalayas. Cold, dry winds blow from this region to the low-pressure areas over the oceans to the south. In summer, a low-pressure area develops over interior Asia, as well as over northwestern India. This causes a complete reversal of the direction of the winds during summer. Air moves from the high-pressure area over the southern Indian Ocean in a south-easterly direction, crosses the equator, and turns right towards the low-pressure areas over the Indian subcontinent. These are known as the Southwest Monsoon winds. These winds blow over the warm oceans, gather moisture and bring widespread rainfall over the mainland of India.

The upper air circulation in this region is dominated by a westerly flow. An important component of this flow is the jet stream. These jet streams are located approximately over 27°–30° north latitude, and are therefore known as subtropical westerly jet streams. Over India, these jet streams blow south of the Himalayas all through the year, except in summer. The western cyclonic disturbances experienced in the north and north-western parts of the country are brought in by this westerly flow. In summer, the subtropical westerly jet stream moves north of the Himalayas with the apparent movement of the Sun. An easterly jet stream, called the sub-tropical easterly jet stream, blows over peninsular India, approximately over 14°N, during the summer months.

The inflow of the south-west monsoon into India brings about a total change in the weather. Early in the season, the windward side of the Western Ghats receives very heavy rainfall, more than 250 cm. The Deccan Plateau and parts of Madhya Pradesh also receive some rain, in spite of lying in the rain shadow area. The maximum rainfall of this season is received in the north-eastern part of the country: Mawsynram, in the southern ranges of the Khasi Hills, receives the highest average rainfall in the world. Rainfall in the Ganga valley decreases from east to west, and Rajasthan and parts of Gujarat get scanty rainfall.

Another phenomenon associated with the monsoon is its tendency to have 'breaks' in rainfall; it has wet and dry spells. In other words, the monsoon rains take place only for a few days at a time, interspersed with rainless intervals. These breaks in the monsoon are related to the movement of the monsoon trough. For various reasons, the trough and its axis keep moving northward or southward, which determines the spatial distribution of rainfall. When the axis of the monsoon trough lies over the plains, rainfall is good in these parts. On the other hand, whenever the axis shifts closer to the Himalayas, there are longer dry spells in the plains, and widespread rains occur in the mountainous catchment areas of the Himalayan rivers. These heavy rains bring in their wake devastating floods, causing damage to life and property in the plains. The frequency and intensity of tropical depressions also determine the amount and duration of monsoon rains. These depressions form at the head of the Bay of Bengal and cross over to the mainland, following the axis of the 'monsoon trough of low pressure'. The monsoon is known for its uncertainties.
The alternation of dry and wet spells varies in intensity, frequency and duration. While it causes heavy floods in one part, it may be responsible for droughts in another. It is often irregular in its arrival and its retreat. Hence, it sometimes disturbs the farming schedule of millions of farmers all over the country.\n\nDuring October-November, with the apparent movement of the sun towards the south, the monsoon trough or the low-pressure trough over the northern plains becomes weaker. This is gradually replaced by a high-pressure system. The south-west monsoon winds weaken and start withdrawing gradually. By the beginning of October, the monsoon withdraws from the Northern Plains.\n\nThe months of October-November form a period of transition from hot rainy season to dry winter conditions. The retreat of the monsoon is marked by clear skies and a rise in temperature. While day temperatures are high, nights are cool and pleasant. The land is still moist. Owing to the conditions of high temperature and humidity, the weather becomes rather oppressive during the day. This is commonly known as \u2018October heat\u2019. In the second half of October, the mercury begins to fall rapidly in northern India.\n\nThe low-pressure conditions over north-western India get transferred to the Bay of Bengal by early November. This shift is associated with the occurrence of cyclonic depressions, which originate over the Andaman Sea. These cyclones generally cross the eastern coasts of India and cause heavy and widespread rain. These tropical cyclones are often very destructive. The thickly populated deltas of the Godavari, the Krishna and the Kaveri are frequently struck by cyclones, which cause great damage to life and property. Sometimes, these cyclones arrive at the coasts of Odisha, West Bengal and Bangladesh. The bulk of the rainfall of the Coromandel Coast is derived from depressions and cyclones.\n\nParts of the western coast and northeastern India receive about 400 cm of rainfall annually. However, it is less than 60 cm in western Rajasthan and adjoining parts of Gujarat, Haryana and Punjab. Rainfall is equally low in the interior of the Deccan plateau, and east of the Sahyadris. Why do these regions receive low rainfall? A third area of low precipitation is around Leh in Jammu and Kashmir. The rest of the country receives moderate rainfall. Snowfall is restricted to the Himalayan region.\n\nOwing to the nature of monsoons, the annual rainfall is highly variable from year to year. Variability is high in the regions of low rainfall, such as parts of Rajasthan, Gujarat and the leeward side of the Western Ghats. As such, while areas of high rainfall are liable to be affected by floods, areas of low rainfall are drought-prone.\n\nYou have already seen how the Himalayas protect the subcontinent from extremely cold winds from central Asia. This enables northern India to have uniformly higher temperatures compared to other areas on the same latitudes. Similarly, the Peninsular plateau, under the influence of the sea from three sides, has moderate temperatures. Despite such moderating influences, there are great variations in the temperature conditions. Nevertheless, the unifying influence of the monsoon on the Indian subcontinent is quite perceptible. The seasonal alternation of the wind systems and the associated weather conditions provide a rhythmic cycle of seasons. Even the uncertainties of rain and uneven distribution are very much typical of the monsoons.
The Indian landscape, its animal and plant life, its entire agricultural calendar and the life of the people, including their festivities, revolve around this phenomenon. Year after year, people of India, from north to south and from east to west, eagerly await the arrival of the monsoon. These monsoon winds bind the whole country by providing water to set the agricultural activities in motion. The river valleys which carry this water also unite as a single river valley unit.", "doc_id": "aaa6a808-4de9-11ed-83a9-0242ac110007"}
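The point about rainfall variability made above can be made concrete. The Python sketch below computes the coefficient of variation (standard deviation divided by mean) for two imaginary rain-gauge stations; the rainfall figures are invented for illustration, not measured data, but they show why a low-rainfall, Rajasthan-like station is far more variable from year to year than a wet, west-coast-like one.

from statistics import mean, pstdev

def coefficient_of_variation(rainfall_cm):
    # CV (%) of a series of annual rainfall totals: 100 * std / mean.
    return 100 * pstdev(rainfall_cm) / mean(rainfall_cm)

# Hypothetical five-year annual totals (cm) for two imaginary stations.
low_rainfall_station = [18, 35, 12, 40, 20]        # arid, Rajasthan-like
high_rainfall_station = [290, 310, 270, 330, 300]  # wet, west-coast-like

print(coefficient_of_variation(low_rainfall_station))   # ~43%: high variability
print(coefficient_of_variation(high_rainfall_station))  # ~7%: low variability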
{"source": "NCERT IX Social Science, India", "document": "The absolute number added each year or decade is the magnitude of increase. It is obtained by simply subtracting the earlier population (e.g. that of 2001) from the later population (e.g. that of 2011). It is referred to as the absolute increase. The rate or the pace of population increase is the other important aspect. It is studied in per cent per annum, e.g. a rate of increase of 2 per cent per annum means that in a given year, there was an increase of two persons for every 100 persons in the base population. This is referred to as the annual growth rate.
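Both measures are simple arithmetic, and the sketch below implements them in Python. The 2001 and 2011 totals used here are approximate round figures (about 1,029 and 1,211 million) assumed only for illustration.

def absolute_increase(earlier, later):
    # Magnitude of increase: later population minus earlier population.
    return later - earlier

def annual_growth_rate(earlier, later, years):
    # Average compound growth, in per cent per annum, over `years` years.
    return ((later / earlier) ** (1 / years) - 1) * 100

pop_2001, pop_2011 = 1029, 1211  # in millions (approximate, for illustration)
print(absolute_increase(pop_2001, pop_2011))                 # 182 million added
print(round(annual_growth_rate(pop_2001, pop_2011, 10), 2))  # about 1.64 per cent per annum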
India\u2019s population has been steadily increasing, from 361 million in 1951 to 1210 million in 2011.\n\nSince 1981, however, the rate of growth started declining gradually. During this period, birth rates declined rapidly. Still, 182 million people were added to the total population in the 1990s alone (an annual addition larger than ever before).\n\nIt is essential to realise that India has a very large population. When a low annual rate is applied to a very large population, it yields a large absolute increase. When more than a billion people increase even at a lower rate, the total number being added becomes very large. India\u2019s annual increase in population is large enough to neutralise efforts to conserve the resource endowment and environment.\n\nThe declining trend of the growth rate is indeed a positive indicator of the efforts of birth control. Despite that, the total additions to the population base continue to grow, and India may overtake China in 2045 to become the most populous country in the world.\n\nThere are three main processes of change of population: birth rates, death rates and migration.\n\nThe natural increase of population is the difference between birth rates and death rates. Birth rate is the number of live births per thousand persons in a year. It is a major component of growth because in India, birth rates have always been higher than death rates.\n\nDeath rate is the number of deaths per thousand persons in a year. The main cause of the high rate of growth of the Indian population has been the rapid decline in death rates.\n\nTill 1980, high birth rates and declining death rates led to a large difference between birth rates and death rates, resulting in higher rates of population growth. Since 1981, birth rates have also started declining gradually, resulting in a gradual decline in the rate of population growth. What are the reasons for this trend?\n\nThe third component of population growth is migration. Migration is the movement of people across regions and territories. Migration can be internal (within the country) or international (between countries). Internal migration does not change the size of the population, but influences the distribution of population within the nation. Migration plays a very significant role in changing the composition and distribution of population. In India, most migrations have been from rural to urban areas because of the \u201cpush\u201d factor in rural areas, that is, adverse conditions of poverty and unemployment, and the \u201cpull\u201d of the city in terms of increased employment opportunities and better living conditions.\n\nMigration is an important determinant of population change. It changes not only the population size but also the population composition of urban and rural populations in terms of age and sex composition. In India, rural-urban migration has resulted in a steady increase in the percentage of population in cities and towns. The urban population has increased from 17.29 per cent of the total population in 1951 to 31.80 per cent in 2011. There has been a significant increase in the number of \u2018million plus cities\u2019, from 35 to 53 in just one decade, i.e., 2001 to 2011.\n\nThe age composition of a population refers to the number of people in different age groups in a country. It is one of the most basic characteristics of a population. To an important degree, a person\u2019s age influences what he/she needs, buys, does and his/her capacity to perform.
Consequently, the number and percentage of a population found within the children, working-age and aged groups are notable determinants of the population\u2019s social and economic structure.\n\nThe population of a nation is, generally, grouped into three broad categories:\nChildren (generally below 15 years) \nThey are economically unproductive and need to be provided with food, clothing, education and medical care.\nWorking Age (15\u201359 years)\nThey are economically productive and biologically reproductive. They comprise the working population.\nAged (Above 59 years)\nThey can be economically productive though they may have retired. They may be working voluntarily but they are not available for employment through recruitment. The percentage of children and the aged affects the dependency ratio, because these groups are not producers. The proportion of the three groups in India\u2019s population is already presented in Figure 6.5.", "doc_id": "9e854e7e-4df0-11ed-9be0-0242ac110007"}
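A hedged sketch of the dependency-ratio idea mentioned above: the ratio relates the non-producers (children and the aged) to the working-age group. The age shares used below are made-up round numbers for illustration, not the Figure 6.5 values.

def dependency_ratio(children_pct, working_pct, aged_pct):
    # Dependants (children + aged) per 100 persons of working age.
    return 100 * (children_pct + aged_pct) / working_pct

# Hypothetical age composition (per cent of total population).
children, working, aged = 30.0, 62.0, 8.0
print(round(dependency_ratio(children, working, aged), 1))  # 61.3 dependants per 100 workers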
{"source": "NCERT IX Social Science, India", "document": "Primary activities include agriculture, animal husbandry, forestry, fishing, mining and quarrying, etc. Secondary activities include manufacturing industry, building and construction work, etc. Tertiary activities include transport, communications, commerce, administration and other services. The proportion of people working in different activities varies in developed and developing countries. Developed nations have a high proportion of people in secondary and tertiary activities. Developing countries tend to have a higher proportion of their workforce engaged in primary activities. In India, about 64 per cent of the population is engaged in agriculture alone. The proportion of population dependent on the secondary and tertiary sectors is about 13 and 20 per cent respectively.
There has been an occupational shift in favour of the secondary and tertiary sectors because of growing industrialisation and urbanisation in recent times.\n\nHealth is an important component of population composition, which affects the process of development. Sustained efforts of government programmes have registered significant improvements in the health conditions of the Indian population. Death rates have declined from 25 per 1000 population in 1951 to 7.2 per 1000 in 2011, and life expectancy at birth has increased from 36.7 years in 1951 to 67.9 years in 2012. The substantial improvement is the result of many factors, including improvement in public health, prevention of infectious diseases and application of modern medical practices in the diagnosis and treatment of ailments.\n\nDespite considerable achievements, the health situation is a matter of major concern for India. The per capita calorie consumption is much below the recommended levels and malnutrition afflicts a large percentage of our population. Safe drinking water and basic sanitation amenities are available to only one-third of the rural population. These problems need to be tackled through an appropriate population policy.\n\nThe most significant feature of the Indian population is the size of its adolescent population. It constitutes one-fifth of the total population of India. Adolescents are, generally, grouped in the age group of 10 to 19 years. They are the most important resource for the future. Nutrition requirements of adolescents are higher than those of a normal child or adult. Poor nutrition can lead to deficiency and stunted growth. But in India, the diet available to adolescents is inadequate in all nutrients. A large number of adolescent girls suffer from anaemia. Their problems have so far not received adequate attention in the process of development. Adolescent girls have to be sensitised to the problems they confront. Awareness among them can be improved through the spread of literacy and education.\n\nRecognising that the planning of families would improve individual health and welfare, the Government of India initiated a comprehensive Family Planning Programme in 1952. The Family Welfare Programme has sought to promote responsible and planned parenthood on a voluntary basis. The National Population Policy (NPP) 2000 is a culmination of years of planned efforts. \n\nThe NPP 2000 provides a policy framework for imparting free and compulsory school education up to 14 years of age, reducing the infant mortality rate to below 30 per 1000 live births, achieving universal immunisation of children against all vaccine-preventable diseases, promoting delayed marriage for girls, and making family welfare a people-centred programme.\n\nNPP 2000 identified adolescents as one of the major sections of the population that need greater attention. Besides nutritional requirements, the policy puts greater emphasis on other important needs of adolescents, including protection from unwanted pregnancies and sexually transmitted diseases (STDs). It called for programmes that aim towards encouraging delayed marriage and child-bearing, education of adolescents about the risks of unprotected sex, making contraceptive services accessible and affordable, providing food supplements and nutritional services, and strengthening legal measures to prevent child marriage. People are the nation\u2019s most valuable resource.
A well-educated, healthy population provides potential power.", "doc_id": "78afaa0e-4df1-11ed-bf9b-0242ac110007"} {"source": "NCERT IX Social Science, India", "document": "After land, labour is the next necessary factor for production. Farming requires a great deal of hard work. Small farmers along with their families cultivate their own fields. Thus, they provide the labour required for farming themselves. Medium and large farmers hire farm labourers to work on their fields.\n\nFarm labourers come either from landless families or families cultivating small plots of land. Unlike farmers, farm labourers do not have a right over the crops grown on the land. Instead they are paid wages by the farmer for whom they work. Wages can be in cash or in kind, e.g. crop. Sometimes labourers get meals also. Wages vary widely from region to region, from crop to crop, and from one farm activity to another (like sowing and harvesting). There is also a wide variation in the duration of employment. A farm labourer might be employed on a daily basis, or for one particular farm activity like harvesting, or for the whole year. \n\nDala is a landless farm labourer who works on daily wages in Palampur. This means he must regularly look for work. The minimum wage for a farm labourer set by the government is Rs 300 per day (March 2019), but Dala gets only Rs 160. There is heavy competition for work among the farm labourers in Palampur, so people agree to work for lower wages. Dala complains about his situation to Ramkali, who is another farm labourer. Both Dala and Ramkali are among the poorest people in the village.\n\nYou have already seen that modern farming methods require a great deal of capital, so that the farmer now needs more money than before.\n1. Most small farmers have to borrow money to arrange for the capital. They borrow from large farmers or the village moneylenders or the traders who supply various inputs for cultivation. The rate of interest on such loans is very high. They are put to great distress to repay the loan.\n2. In contrast to the small farmers, the medium and large farmers have their own savings from farming. They are thus able to arrange for the capital needed. How do these farmers have their own savings? You shall find the answer in the next section.\n\nLet us suppose that the farmers have produced wheat on their lands using the three factors of production. The wheat is harvested and production is complete. What do the farmers do with the wheat? They retain a part of the wheat for the family\u2019s consumption and sell the surplus wheat. Small farmers like Savita and Gobind\u2019s sons have little surplus wheat because their total production is small, and from this a substantial share is kept for their own family needs. So it is the medium and large farmers who supply wheat to the market. In Picture 1.1, you can see bullock carts streaming into the market, each carrying loads of wheat. The traders at the market buy the wheat and sell it further to shopkeepers in the towns and cities.\n\nTejpal Singh, the large farmer, has a surplus of 350 quintals of wheat from all his lands! He sells the surplus wheat at the Raiganj market and has good earnings. What does Tejpal Singh do with his earnings? Last year, Tejpal Singh had put most of the money in his bank account. Later he used the savings for lending to farmers like Savita who were in need of a loan. He also used the savings to arrange for the working capital for farming in the next season.
This year Tejpal Singh plans to use his earnings to buy another tractor. Another tractor would increase his fixed capital.\n\nLike Tejpal Singh, other large and medium farmers sell their surplus farm products. A part of the earnings is saved and kept for buying capital for the next season. Thus, they are able to arrange for the capital for farming from their own savings. Some farmers might also use the savings to buy cattle, trucks, or to set up shops. As we shall see, these constitute the capital for non-farm activities.", "doc_id": "3052831a-4df3-11ed-b9b5-0242ac110007"} {"source": "NCERT IX Social Science, India", "document": "Due to historical and cultural reasons there is a division of labour between men and women in the family. Women generally look after domestic chores and men work in the fields. Sakal\u2019s mother Sheela cooks food, cleans utensils, washes clothes, cleans the house and looks after her children. Sakal\u2019s father Buta cultivates the field, sells the produce in the market and earns money for the family. \n\nSheela is not paid for the services she delivers in bringing up the family. Buta earns money, which he spends on rearing his family. Women are not paid for the services they deliver in the family. The household work done by women is not recognised in the National Income. Geeta, mother of Vilas, earned an income by selling fish. Thus women are paid for their work when they enter the labour market. Their earnings, like those of their male counterparts, are determined on the basis of education and skill. \n\nEducation helps an individual to make better use of the economic opportunities available to him or her. Education and skill are the major determinants of the earning of any individual in the market. A majority of women have meagre education and low skill formation, and women are paid less than men. Most women work in jobs that lack security, and legal protection for such work is meagre. Employment in this sector is characterised by irregular and low income. In this sector there is an absence of basic facilities like maternity leave, childcare and other social security systems. However, women with high education and skill formation are paid at par with men. In the organised sector, teaching and medicine attract them the most. Some women have entered administrative and other services, including jobs that need high levels of scientific and technological competence. \n\nThe quality of population depends upon the literacy rate, the health of a person indicated by life expectancy, and the skill formation acquired by the people of the country. The quality of the population ultimately decides the growth rate of the country. A literate and healthy population is an asset.\n\nSakal\u2019s education in the initial years of his life bore fruit in later years in terms of a good job and salary. We saw that education was an important input for the growth of Sakal. It opened new horizons for him, provided new aspirations and developed values of life. Education contributes not only to the growth of individuals like Sakal but also to that of society. It enhances the national income, cultural richness and increases the efficiency of governance. Provision has been made for universal access, retention and quality in elementary education, with a special emphasis on girls. Pace-setting schools like Navodaya Vidyalaya have also been established in each district.
Vocational streams have been developed to equip a large number of high school students with occupation-related knowledge and skills. The plan outlay on education has increased from Rs 151 crore in the first plan to Rs 99,300 crore in 2020\u201321. The expenditure on education as a percentage of GDP rose from 0.64% in 1951\u201352 to 3.1% in 2019\u201320 (B.E.) and has remained stagnant at around 3% for the past few years. As per the Budgetary Estimates stated in the Budget Documents of the Union and State Governments and the Reserve Bank of India, the expenditure on education as a percentage of GDP declined to 2.8% in 2020\u201321 (B.E.). The literacy rates have increased from 18% in 1951 to 85% in 2018. Literacy is not only a right, it is also needed if citizens are to perform their duties and enjoy their rights properly. However, a vast difference is noticed across different sections of the population. Literacy among males is nearly 16.1% higher than among females, and it is about 14.2% higher in urban areas than in rural areas. As per the 2011 census, literacy rates varied from 94% in Kerala to 62% in Bihar. The primary school system (I\u2013V) has expanded to over 7,78,842 schools in 2019\u201320. Unfortunately, this huge expansion of schools has been diluted by the poor quality of schooling and high dropout rates. \u201cSarva Siksha Abhiyan is a significant step towards providing elementary education to all children in the age group of 6\u201314 years by 2010... It is a time-bound initiative of the Central government, in partnership with the States, the local government and the community for achieving the goal of universalisation of elementary education.\u201d Along with it, bridge courses and back-to-school camps have been initiated to increase enrolment in elementary education. The mid-day meal scheme has been implemented to encourage attendance and retention of children and improve their nutritional status. These policies could add to the literate population of India.\n\nThe Gross Enrolment Ratio (GER) in higher education in the age group of 18 to 23 years was 27% in 2019\u201320, broadly in line with the world average. The strategy focuses on increasing access, quality, adoption of state-specific curriculum modifications, vocationalisation and networking on the use of information technology. There is also a focus on distance education and the convergence of formal, non-formal, distance and IT education institutions.\n\nOver the past 60 years, there has been a significant growth in the number of universities and institutions of higher learning in specialised areas. Let us read the table to see the increase in the number of colleges and universities, the enrolment of students and the recruitment of teachers from 1951 to 2019\u201320.", "doc_id": "75e74f62-4df5-11ed-85cb-0242ac110007"}
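As an illustrative aside, the literacy rates quoted above follow the census definition: literates as a percentage of the population aged 7 years and above. The sketch below uses invented district figures to show the computation and the kind of male-female gap the text describes.

def literacy_rate(literates_lakh, population_7plus_lakh):
    # Literates per 100 persons aged 7 years and above.
    return 100 * literates_lakh / population_7plus_lakh

# Hypothetical district: 8.5 lakh literates among 10 lakh persons aged 7+.
print(literacy_rate(8.5, 10.0))          # 85.0 per cent overall

male_rate = literacy_rate(4.6, 5.0)      # 92.0 per cent
female_rate = literacy_rate(3.9, 5.0)    # 78.0 per cent
print(round(male_rate - female_rate, 1)) # a 14.0 percentage-point gap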
{"source": "NCERT IX Social Science, India", "document": "Firms maximise profit: Do you think any firm would be induced to employ people who, because of ill health, might not work as efficiently as healthy workers? The health of a person helps him/her to realise his/her potential and gives the ability to fight illness. An unhealthy person is unable to maximise his/her output and contribute to the overall growth of the organisation. Indeed, health is an indispensable basis for realising one\u2019s well-being. Hence, improvement in the health status of the population has been a priority of the country. Our national policy, too, aims at improving the accessibility of healthcare, family welfare and nutritional services, with a special focus on the underprivileged segments of the population. Over the last five decades, India has built a vast health infrastructure and has also developed the manpower required at the primary, secondary and tertiary levels, in government as well as in the private sector.\n\nThere are many places in India which do not have even these basic facilities. There are only 542 medical colleges in the country and 313 dental colleges. Just four states (Andhra Pradesh, Karnataka, Maharashtra and Tamil Nadu) have the maximum number of medical colleges.", "doc_id": "17e03f22-4df6-11ed-a030-0242ac110007"} {"source": "NCERT IX Social Science, India", "document": "Since poverty has many facets, social scientists look at it through a variety of indicators. Usually the indicators used relate to levels of income and consumption. But poverty is now also looked at through other social indicators like the illiteracy level, lack of general resistance due to malnutrition, lack of access to healthcare, lack of job opportunities, and lack of access to safe drinking water, sanitation, etc. Analysis of poverty based on social exclusion and vulnerability is now becoming very common.\n\nAt the centre of the discussion on poverty is usually the concept of the \u201cpoverty line\u201d. A common method used to measure poverty is based on income or consumption levels. A person is considered poor if his or her income or consumption level falls below a given \u201cminimum level\u201d necessary to fulfil the basic needs. What is necessary to satisfy the basic needs is different at different times and in different countries. Therefore, the poverty line may vary with time and place. Each country uses an imaginary line that is considered appropriate for its existing level of development and its accepted minimum social norms. For example, a person not having a car in the United States may be considered poor. In India, owning a car is still considered a luxury.\n\nWhile determining the poverty line in India, a minimum level of food requirement, clothing, footwear, fuel and light, educational and medical requirements, etc., are determined for subsistence. These physical quantities are multiplied by their prices in rupees. The present formula for the food requirement while estimating the poverty line is based on the desired calorie requirement. Food items, such as cereals, pulses, vegetables, milk, oil, sugar, etc., together provide these needed calories.
The calorie needs vary depending on age, sex and the type of work that a person does. The accepted average calorie requirement in India is 2400 calories per person per day in rural areas and 2100 calories per person per day in urban areas. Since people living in rural areas engage themselves in more physical work, calorie requirements in rural areas are considered to be higher than in urban areas. The monetary expenditure per capita needed for buying these calorie requirements in terms of food grains, etc., is revised periodically, taking into consideration the rise in prices.\n\nOn the basis of these calculations, for the year 2011\u201312, the poverty line for a person was fixed at Rs 816 per month for rural areas and Rs 1000 for urban areas. Despite the lower calorie requirement, the higher amount has been fixed for urban areas because of the high prices of many essential products in urban centres. In this way, in the year 2011\u201312, a family of five members living in rural areas and earning less than about Rs 4,080 per month would be below the poverty line. A similar family in the urban areas would need a minimum of Rs 5,000 per month to meet their basic requirements. The poverty line is estimated periodically (normally every five years) by conducting sample surveys. These surveys are carried out by the National Sample Survey Organisation (NSSO). However, for making comparisons between developing countries, many international organisations like the World Bank use a uniform standard for the poverty line: minimum availability of the equivalent of $1.90 per person per day.\n\nIt is clear from Table 3.1 that there was a substantial decline in poverty ratios in India, from about 45 per cent in 1993\u201394 to 37.2 per cent in 2004\u201305. The proportion of people below the poverty line further came down to about 22 per cent in 2011\u201312. If the trend continues, the proportion of people below the poverty line may come down to less than 20 per cent in the next few years. The percentage of people living in poverty had already declined in the earlier two decades (1973\u20131993), and the number of poor fell from 407 million in 2004\u201305 to 270 million in 2011\u201312, an average annual decline of 2.2 percentage points during 2004\u201305 to 2011\u201312.\n\nThe proportion of people below the poverty line is also not the same for all social groups and economic categories in India. The social groups most vulnerable to poverty are Scheduled Caste and Scheduled Tribe households. Similarly, among the economic groups, the most vulnerable are the rural agricultural labour households and the urban casual labour households. Graph 3.1 shows the percentage of poor people in all these groups. Although the average for people below the poverty line for all groups in India is 22 per cent, 43 out of 100 people belonging to Scheduled Tribes are not able to meet their basic needs. Similarly, 34 per cent of casual workers in urban areas are below the poverty line. About 34 per cent of casual farm labour (in rural areas) and 29 per cent of Scheduled Castes are also poor. The double disadvantage of being a landless casual wage labour household within the socially disadvantaged groups of the Scheduled Caste or Scheduled Tribe population highlights the seriousness of the problem. Some recent studies have shown that except for the Scheduled Tribe households, all the other three groups (i.e.
scheduled castes, rural agricultural labourers and the urban casual labour households) have seen a decline in poverty in the 1990s.", "doc_id": "acca4f04-4df8-11ed-8954-0242ac110007"}
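A minimal sketch of the poverty-line arithmetic described above, using the 2011-12 per-person monthly lines quoted in the text (Rs 816 rural, Rs 1000 urban); the sample incomes are invented for illustration.

RURAL_LINE = 816   # Rs per person per month (2011-12)
URBAN_LINE = 1000  # Rs per person per month (2011-12)

def is_below_poverty_line(monthly_income_rs, family_size, area):
    # A family is below the line if its income falls short of the
    # per-person line multiplied by the number of family members.
    per_person_line = RURAL_LINE if area == "rural" else URBAN_LINE
    return monthly_income_rs < per_person_line * family_size

# A five-member rural family needs about Rs 816 * 5 = Rs 4,080 a month.
print(is_below_poverty_line(3500, 5, "rural"))  # True  (below Rs 4,080)
print(is_below_poverty_line(5200, 5, "urban"))  # False (above Rs 5,000)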
{"source": "NCERT IX Social Science, India", "document": "There were a number of causes for the widespread poverty in India. One historical reason is the low level of economic development under the British colonial administration. The policies of the colonial government ruined traditional handicrafts and discouraged the development of industries like textiles. The low rate of growth persisted until the nineteen-eighties. This resulted in fewer job opportunities and a low growth rate of incomes. This was accompanied by a high growth rate of population. The two combined to make the growth rate of per capita income very low. The failure on both fronts, promotion of economic growth and population control, perpetuated the cycle of poverty.\n\nWith the spread of irrigation and the Green Revolution, many job opportunities were created in the agriculture sector. But the effects were limited to some parts of India. The industries, both in the public and the private sector, did provide some jobs. But these were not enough to absorb all the job seekers. Unable to find proper jobs in cities, many people started working as rickshaw pullers, vendors, construction workers, domestic servants, etc. With irregular small incomes, these people could not afford expensive housing.
They started living in slums on the outskirts of the cities, and the problems of poverty, largely a rural phenomenon, also became a feature of the urban sector.\n\nAnother feature of high poverty rates has been the huge income inequalities. One of the major reasons for this is the unequal distribution of land and other resources. Despite many policies, we have not been able to tackle the issue in a meaningful manner. Major policy initiatives like land reforms, which aimed at the redistribution of assets in rural areas, have not been implemented properly and effectively by most of the state governments. Since lack of land resources has been one of the major causes of poverty in India, proper implementation of policy could have improved the lives of millions of rural poor.\n\nMany other socio-cultural and economic factors are also responsible for poverty. In order to fulfil social obligations and observe religious ceremonies, people in India, including the very poor, spend a lot of money. Small farmers need money to buy agricultural inputs like seeds, fertilizer, pesticides, etc. Since poor people hardly have any savings, they borrow. Unable to repay because of poverty, they become victims of indebtedness. So the high level of indebtedness is both the cause and the effect of poverty.\n\nRemoval of poverty has been one of the major objectives of Indian developmental strategy. The current anti-poverty strategy of the government is based broadly on two planks: (1) promotion of economic growth and (2) targeted anti-poverty programmes.\n\nOver a period of thirty years lasting up to the early eighties, there was little per capita income growth and not much reduction in poverty. Official poverty estimates, which were about 45 per cent in the early 1950s, remained about the same even in the early eighties. Since the eighties, India\u2019s economic growth has been one of the fastest in the world. The growth rate jumped from an average of about 3.5 per cent a year in the 1970s to about 6 per cent during the 1980s and 1990s. The higher growth rates have helped significantly in the reduction of poverty. Therefore, it is becoming clear that there is a strong link between economic growth and poverty reduction. Economic growth widens opportunities and provides the resources needed to invest in human development. It also encourages people to send their children, including the girl child, to school in the hope of getting better economic returns from investing in education. However, the poor may not be able to take direct advantage of the opportunities created by economic growth. Moreover, growth in the agriculture sector is much below expectations. This has a direct bearing on poverty, as a large number of poor people live in villages and are dependent on agriculture.\n\nIn these circumstances, there is a clear need for targeted anti-poverty programmes. Although there are many schemes formulated to affect poverty directly or indirectly, some of them are worth mentioning. The Mahatma Gandhi National Rural Employment Guarantee Act, 2005 aims to provide 100 days of wage employment to every household to ensure livelihood security in rural areas. It also aims at sustainable development to address the causes of drought, deforestation and soil erosion. One-third of the proposed jobs have been reserved for women. The scheme provided 220 crore person-days of employment to 4.78 crore households. The shares of SC, ST and women person-days in the scheme are 23 per cent, 17 per cent and 53 per cent respectively.
The average wage has increased from Rs 65 in 2006\u201307 to Rs 132 in 2013\u201314. In March 2018, the wage rate for unskilled manual workers was revised state-wise; across states and union territories it ranges from Rs 281 per day (for workers in Haryana) down to Rs 168 per day (for workers in Bihar and Jharkhand).\n\nPrime Minister Rozgar Yozana (PMRY) is another scheme, started in 1993. The aim of the programme is to create self-employment opportunities for educated unemployed youth in rural areas and small towns. They are helped in setting up small businesses and industries. The Rural Employment Generation Programme (REGP) was launched in 1995. The aim of the programme is to create self-employment opportunities in rural areas and small towns. A target of creating 25 lakh new jobs was set for the programme under the Tenth Five Year Plan. Swarnajayanti Gram Swarozgar Yojana (SGSY) was launched in 1999. The programme aims at bringing the assisted poor families above the poverty line by organising them into self-help groups through a mix of bank credit and government subsidy. Under the Pradhan Mantri Gramodaya Yozana (PMGY), launched in 2000, additional central assistance is given to states for basic services such as primary health, primary education, rural shelter, rural drinking water and rural electrification. Another important scheme is Antyodaya Anna Yozana (AAY), about which you will be reading more in the next chapter.\n\nThe results of these programmes have been mixed. One of the major reasons for their limited effectiveness is the lack of proper implementation and right targeting. Moreover, there has been a lot of overlapping of schemes. Despite good intentions, the benefits of these schemes do not fully reach the deserving poor. Therefore, the major emphasis in recent years has been on proper monitoring of all the poverty alleviation programmes.", "doc_id": "0df37070-4dfa-11ed-b9a1-0242ac110007"}
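A back-of-the-envelope sketch using the MGNREGA figures quoted above (220 crore person-days; SC, ST and women shares of 23, 17 and 53 per cent). Note that the categories presumably overlap (e.g. SC women), so the shares need not sum to 100.

total_person_days_crore = 220  # total employment generated, in crore person-days

shares = {"SC": 0.23, "ST": 0.17, "Women": 0.53}
for group, share in shares.items():
    # Person-days attributable to each group, from the quoted percentage shares.
    print(f"{group}: about {total_person_days_crore * share:.1f} crore person-days")
# SC: about 50.6, ST: about 37.4, Women: about 116.6 crore person-days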
The number of food insecure people is disproportionately large in some regions of the country, such as economically backward states with a high incidence of poverty, tribal and remote areas, and regions more prone to natural disasters. In fact, the states of Uttar Pradesh (eastern and south-eastern parts), Bihar, Jharkhand, Orissa, West Bengal, Chhattisgarh, and parts of Madhya Pradesh and Maharashtra account for the largest number of food insecure people in the country.

Hunger is another aspect indicating food insecurity. Hunger is not just an expression of poverty; it brings about poverty. The attainment of food security therefore involves eliminating current hunger and reducing the risks of future hunger. Hunger has chronic and seasonal dimensions. Chronic hunger is a consequence of diets persistently inadequate in terms of quantity and/or quality. Poor people suffer from chronic hunger because of their very low income and, in turn, their inability to buy food even for survival. Seasonal hunger is related to the cycles of food growing and harvesting. It is prevalent in rural areas because of the seasonal nature of agricultural activities, and in urban areas because of the casual nature of much labour; for example, there is less work for casual construction labourers during the rainy season. This type of hunger exists when a person is unable to get work for the entire year.

The percentage of seasonal as well as chronic hunger has declined in India, as shown in the table above. India has been aiming at self-sufficiency in foodgrains since Independence.
After Independence, Indian policy-makers adopted all measures to achieve self-sufficiency in food grains. India adopted a new strategy in agriculture, which resulted in the 'Green Revolution', especially in the production of wheat and rice.

Indira Gandhi, the then Prime Minister of India, officially recorded the impressive strides of the Green Revolution in agriculture by releasing a special stamp entitled 'Wheat Revolution' in July 1968. The success of wheat was later replicated in rice. The increase in foodgrains was, however, disproportionate. The highest levels were achieved in Uttar Pradesh and Madhya Pradesh, whose foodgrain production was 44.01 and 30.21 million tonnes, respectively, in 2015–16. The total foodgrain production was 252.22 million tonnes in 2015–16 and rose to 275.68 million tonnes in 2016–17. Uttar Pradesh and Madhya Pradesh recorded significant wheat production of 26.87 and 17.69 million tonnes, respectively, in 2015–16. West Bengal and Uttar Pradesh, on the other hand, recorded significant rice production of 15.75 and 12.51 million tonnes, respectively, in 2015–16.

Since the advent of the Green Revolution in the early 1970s, the country has avoided famine even during adverse weather conditions. India has become self-sufficient in foodgrains during the last 30 years because of a variety of crops grown all over the country. The availability of foodgrains (even in adverse weather conditions or otherwise) at the country level has further been ensured with a carefully designed food security system by the government. This system has two components: (a) buffer stock, and (b) the public distribution system.

Buffer Stock is the stock of foodgrains, namely wheat and rice, procured by the government through the Food Corporation of India (FCI). The FCI purchases wheat and rice from the farmers in states where there is surplus production. The farmers are paid a pre-announced price for their crops. This price is called the Minimum Support Price (MSP). The MSP is declared by the government every year before the sowing season to provide incentives to farmers for raising the production of these crops. The purchased foodgrains are stored in granaries. Do you know why this buffer stock is created by the government? It is done to distribute foodgrains in the deficit areas and among the poorer strata of society at a price lower than the market price, known as the Issue Price. This also helps resolve the problem of shortage of food during adverse weather conditions or during periods of calamity.

The food procured by the FCI is distributed through government-regulated ration shops among the poorer sections of society. This is called the Public Distribution System (PDS). Ration shops are now present in most localities, villages, towns and cities. There are about 5.5 lakh ration shops all over the country. Ration shops, also known as Fair Price Shops, keep stocks of foodgrains, sugar and kerosene for cooking. These items are sold to people at a price lower than the market price. Any family with a ration card* can buy a stipulated amount of these items (e.g. 35 kg of grains, 5 litres of kerosene, 5 kg of sugar, etc.) every month from the nearby ration shop.
Public Distribution System (PDS) is the most important step taken by the Government of India (GoI) towards ensuring food security. In the beginning, the coverage of the PDS was universal, with no discrimination between the poor and the non-poor. Over the years, the policy related to the PDS has been revised to make it more efficient and targeted. In 1992, the Revamped Public Distribution System (RPDS) was introduced in 1,700 blocks in the country. The target was to provide the benefits of the PDS to remote and backward areas. From June 1997, in a renewed attempt, the Targeted Public Distribution System (TPDS) was introduced to adopt the principle of targeting the 'poor in all areas'. It was for the first time that a differential price policy was adopted for the poor and non-poor. Further, in 2000, two special schemes were launched, viz., the Antyodaya Anna Yojana (AAY) and the Annapurna Scheme (APS), with the special target groups of the 'poorest of the poor' and 'indigent senior citizens', respectively. The functioning of these two schemes was linked with the existing network of the PDS.

The PDS has proved to be the most effective instrument of government policy over the years in stabilising prices and making food available to consumers at affordable prices. It has been instrumental in averting widespread hunger and famine by supplying food from surplus regions of the country to the deficit ones. In addition, the prices have been under revision in favour of poor households in general. The system, including the minimum support price and procurement, has contributed to an increase in foodgrain production and provided income security to farmers in certain regions.

However, the Public Distribution System has faced severe criticism on several grounds. Instances of hunger are prevalent despite overflowing granaries. FCI godowns are overflowing with grains, with some rotting away and some being eaten by rats. Graph 4.2 shows the difference between foodgrain stocks in the Central pool and the stocking norms.

In 2014, the stock of wheat and rice with the FCI was 65.3 million tonnes, much more than the minimum buffer norms, and stocks remained consistently higher than these norms. The situation improved with the distribution of foodgrains under different schemes launched by the government. There is a general consensus that a high level of buffer stocks of foodgrains is very undesirable and can be wasteful.
The storage of massive food stocks has been responsible for high carrying costs, in addition to wastage and deterioration in grain quality. Freezing of the MSP for a few years should be considered seriously.

The increased foodgrain procurement at enhanced MSP is the result of the pressure exerted by leading foodgrain-producing states, such as Punjab, Haryana and Andhra Pradesh. Moreover, as the procurement is concentrated in a few prosperous regions (Punjab, Haryana, western Uttar Pradesh, Andhra Pradesh and, to a lesser extent, West Bengal) and mainly in two crops, wheat and rice, the increase in MSP has induced farmers, particularly in surplus states, to divert land from the production of coarse grains, which are the staple food of the poor, to the production of rice and wheat. The intensive utilisation of water in the cultivation of rice has also led to environmental degradation and a fall in the water level, threatening the sustainability of agricultural development in these states.

As per NSSO Report No. 558, in rural India the per person per month consumption of rice declined from 6.38 kg in 2004–05 to 5.98 kg in 2011–12. In urban India, the per person per month consumption of rice, too, declined from 4.71 kg in 2004–05 to 4.49 kg in 2011–12. Per capita consumption of PDS rice has doubled in rural India and increased by 66% in urban India since 2004–05. Per capita consumption of PDS wheat has doubled since 2004–05 in both rural and urban India.

PDS dealers are sometimes found resorting to malpractices like diverting the grains to the open market to get a better margin, selling poor quality grains at ration shops, irregular opening of the shops, etc. It is common to find that ration shops regularly have unsold stocks of poor quality grains left. This has proved to be a big problem. When ration shops are unable to sell, a massive stock of foodgrains piles up with the FCI. In recent years, there is another factor that has led to the decline of the PDS. Earlier, every family, poor and non-poor, had a ration card with a fixed quota of items such as rice, wheat, sugar, etc. These were sold at the same low price to every family. The three types of cards and the range of prices that you see today did not exist. A large number of families could buy foodgrains from the ration shops, subject to a fixed quota. These included low-income families whose incomes were marginally higher than those of the below poverty line families. Now, with the TPDS's three different prices, any family above the poverty line gets very little discount at the ration shop. The price for an APL family is almost as high as the open market price, so there is little incentive for them to buy these items from the ration shop.

Activity 1.1 can be described as follows: when a magnesium ribbon is burnt in oxygen, it gets converted to magnesium oxide. This description of a chemical reaction in sentence form is quite long. It can be written in a shorter form. The simplest way to do this is to write it in the form of a word-equation.

The substances that undergo chemical change in reaction (1.1), magnesium and oxygen, are the reactants. The new substance, magnesium oxide, formed during the reaction, is the product. A word-equation shows the change of reactants to products through an arrow placed between them. The reactants are written on the left-hand side (LHS) with a plus sign (+) between them.
Similarly, products are written on the right-hand side (RHS) with a plus sign (+) between them. The arrowhead points towards the products and shows the direction of the reaction.

Is there any other, shorter way of representing chemical equations? Chemical equations can be made more concise and useful if we use chemical formulae instead of words. A chemical equation represents a chemical reaction.

Count and compare the number of atoms of each element on the LHS and RHS of the arrow. Is the number of atoms of each element the same on both sides? If yes, then the equation is balanced. If not, then the equation is unbalanced, because the mass is then not the same on both sides of the equation. Such a chemical equation is a skeletal chemical equation for a reaction. Equation (1.2) is a skeletal chemical equation for the burning of magnesium in air.

Recall the law of conservation of mass that you studied in Class IX: mass can neither be created nor destroyed in a chemical reaction. That is, the total mass of the elements present in the products of a chemical reaction has to be equal to the total mass of the elements present in the reactants. In other words, the number of atoms of each element remains the same before and after a chemical reaction. Hence, we need to balance a skeletal chemical equation. Is the chemical Eq. (1.2) balanced? Let us learn about balancing a chemical equation step by step.

Step I: To balance a chemical equation, first draw boxes around each formula. Do not change anything inside the boxes while balancing the equation.

Step II: List the number of atoms of different elements present in the unbalanced equation.

Step III: It is often convenient to start balancing with the compound that contains the maximum number of atoms. It may be a reactant or a product. In that compound, select the element which has the maximum number of atoms. Using these criteria, we select Fe3O4 and the element oxygen in it. There are four oxygen atoms on the RHS and only one on the LHS. To equalise the number of atoms, it must be remembered that we cannot alter the formulae of the compounds or elements involved in the reaction.

Step IV: Fe and H atoms are still not balanced. Pick any of these elements to proceed further. Let us balance hydrogen atoms in the partly balanced equation.

Step V: Examine the above equation and pick the third element which is not balanced. You find that only one element is left to be balanced, that is, iron.

Step VI: Finally, to check the correctness of the balanced equation, we count the atoms of each element on both sides of the equation. The numbers of atoms of elements on both sides of Eq. (1.9) are equal. This equation is now balanced. This method of balancing chemical equations is called the hit-and-trial method, as we make trials to balance the equation using the smallest whole-number coefficients.

Step VII: Writing Symbols of Physical States. Carefully examine the balanced Eq. (1.9). Does this equation tell us anything about the physical state of each reactant and product? No information has been given in this equation about their physical states. To make a chemical equation more informative, the physical states of the reactants and products are mentioned along with their chemical formulae. The gaseous, liquid, aqueous and solid states of reactants and products are represented by the notations (g), (l), (aq) and (s), respectively. The word aqueous (aq) is written if the reactant or product is present as a solution in water.
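The hit-and-trial search of Step VI can be mimicked in a few lines of code. The sketch below is an illustration, not part of the chapter: it counts atoms in simple formulae (element symbols with optional counts, no brackets) and tries small whole-number coefficients for the skeletal equation behind Eq. (1.9), Fe + H2O → Fe3O4 + H2:

import re
from collections import Counter
from itertools import product

def count_atoms(formula):
    # Count atoms in a simple formula such as 'Fe3O4' or 'H2O'.
    counts = Counter()
    for symbol, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[symbol] += int(num) if num else 1
    return counts

def is_balanced(coeffs, reactants, products):
    # Add LHS atoms, subtract RHS atoms; all zeros means balanced.
    net = Counter()
    for coeff, formula in zip(coeffs, reactants + products):
        sign = 1 if formula in reactants else -1
        for symbol, n in count_atoms(formula).items():
            net[symbol] += sign * coeff * n
    return all(v == 0 for v in net.values())

# Skeletal equation: Fe + H2O -> Fe3O4 + H2
reactants, products = ["Fe", "H2O"], ["Fe3O4", "H2"]
for coeffs in product(range(1, 6), repeat=4):  # hit and trial over small coefficients
    if is_balanced(coeffs, reactants, products):
        print(coeffs)  # (3, 4, 1, 4), i.e. 3Fe + 4H2O -> Fe3O4 + 4H2
        break

The first balanced set of coefficients found is also the smallest, matching the whole-number coefficients the text arrives at by hand.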
In Section 2.1 we have seen that all acids have similar chemical properties. What leads to this similarity in properties? We saw in Activity 2.3 that all acids generate hydrogen gas on reacting with metals, so hydrogen seems to be common to all acids. Let us perform an Activity to investigate whether all compounds containing hydrogen are acidic.

The bulb will start glowing in the case of acids, as shown in Fig. 2.3. But you will observe that glucose and alcohol solutions do not conduct electricity. Glowing of the bulb indicates that there is a flow of electric current through the solution. The electric current is carried through the acidic solution by ions.

The process of dissolving an acid or a base in water is a highly exothermic one. Care must be taken while mixing concentrated nitric acid or sulphuric acid with water. The acid must always be added slowly to water, with constant stirring. If water is added to a concentrated acid, the heat generated may cause the mixture to splash out and cause burns. The glass container may also break due to excessive local heating. Look out for the warning sign (shown in Fig. 2.5) on the can of concentrated sulphuric acid and on the bottle of sodium hydroxide pellets. Mixing an acid or base with water results in a decrease in the concentration of ions per unit volume. Such a process is called dilution, and the acid or the base is said to be diluted.

We can do this by making use of a universal indicator, which is a mixture of several indicators. The universal indicator shows different colours at different concentrations of hydrogen ions in a solution. A scale for measuring hydrogen ion concentration in a solution, called the pH scale, has been developed. The p in pH stands for 'potenz' in German, meaning power. On the pH scale we can measure pH generally from 0 (very acidic) to 14 (very alkaline). pH should be thought of simply as a number which indicates the acidic or basic nature of a solution. The higher the hydronium ion concentration, the lower the pH value.

The pH of a neutral solution is 7. Values less than 7 on the pH scale represent an acidic solution. As the pH value increases from 7 to 14, it represents an increase in OH– ion concentration in the solution, that is, an increase in the strength of the alkali (Fig. 2.6). Generally, paper impregnated with the universal indicator is used for measuring pH.

The strength of acids and bases depends on the number of H+ ions and OH– ions produced, respectively. If we take hydrochloric acid and acetic acid of the same concentration, say one molar, then these produce different amounts of hydrogen ions. Acids that give rise to more H+ ions are said to be strong acids, and acids that give fewer H+ ions are said to be weak acids. Can you now say what weak and strong bases are?

Our body works within the pH range of 7.0 to 7.8. Living organisms can survive only in a narrow range of pH change. When the pH of rain water is less than 5.6, it is called acid rain. When acid rain flows into the rivers, it lowers the pH of the river water. The survival of aquatic life in such rivers becomes difficult.

Plants require a specific pH range for their healthy growth. To find out the pH required for the healthy growth of a plant, you can collect the soil from various places and check the pH in the manner described in Activity 2.12. Also, you can note down which plants are growing in the region from which you have collected the soil.
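In this chapter, pH is treated simply as a number read off universal indicator paper. The standard quantitative definition, which is taught in higher classes and not given here, is pH = -log10 of the hydronium ion concentration in mol/L. A minimal sketch assuming that definition, to show why a higher hydronium ion concentration means a lower pH:

import math

def ph_value(hydronium_molarity):
    # Standard quantitative definition: pH = -log10 of [H3O+] in mol/L.
    return -math.log10(hydronium_molarity)

for conc in (1e-3, 1e-7, 1e-10):
    p = round(ph_value(conc), 2)
    nature = "acidic" if p < 7 else ("neutral" if p == 7 else "basic")
    print(conc, "->", p, nature)
# 0.001 -> 3.0 acidic; 1e-07 -> 7.0 neutral; 1e-10 -> 10.0 basic:
# the higher the hydronium ion concentration, the lower the pH.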
It is very interesting to note that our stomach produces hydrochloric acid. It helps in the digestion of food without harming the stomach. During indigestion, the stomach produces too much acid and this causes pain and irritation. To get rid of this pain, people use bases called antacids. One such remedy must have been suggested by you at the beginning of this Chapter. These antacids neutralise the excess acid. Magnesium hydroxide (Milk of magnesia), a mild base, is often used for this purpose.

Tooth decay starts when the pH of the mouth is lower than 5.5. Tooth enamel, made up of calcium hydroxyapatite (a crystalline form of calcium phosphate), is the hardest substance in the body. It does not dissolve in water, but is corroded when the pH in the mouth is below 5.5. Bacteria present in the mouth produce acids by degradation of sugar and food particles remaining in the mouth after eating. The best way to prevent this is to clean the mouth after eating food. Using toothpastes, which are generally basic, for cleaning the teeth can neutralise the excess acid and prevent tooth decay.

Salts of a strong acid and a strong base are neutral, with a pH value of 7. On the other hand, salts of a strong acid and a weak base are acidic, with a pH value less than 7, and those of a strong base and a weak acid are basic in nature, with a pH value more than 7. By now you have learnt that the salt formed by the combination of hydrochloric acid and sodium hydroxide solution is called sodium chloride. This is the salt that you use in food. You must have observed in the above Activity that it is a neutral salt.

Seawater contains many salts dissolved in it. Sodium chloride is separated from these salts. Deposits of solid salt are also found in several parts of the world. These large crystals are often brown due to impurities. This is called rock salt. Beds of rock salt were formed when seas of bygone ages dried up. Rock salt is mined like coal.

You have learnt about oxidation reactions in the first Chapter. Carbon compounds can be easily oxidised on combustion. In addition to this complete oxidation, we have reactions in which alcohols are converted to carboxylic acids. We see that some substances are capable of adding oxygen to others. Alkaline potassium permanganate or acidified potassium dichromate, for example, oxidise alcohols to acids, that is, they add oxygen to the starting material. Hence they are known as oxidising agents.

Unsaturated hydrocarbons add hydrogen in the presence of catalysts such as palladium or nickel to give saturated hydrocarbons. Catalysts are substances that cause a reaction to occur or proceed at a different rate without the reaction itself being affected. This reaction is commonly used in the hydrogenation of vegetable oils using a nickel catalyst. Vegetable oils generally have long unsaturated carbon chains, while animal fats have saturated carbon chains.

You must have seen advertisements stating that some vegetable oils are 'healthy'. Animal fats generally contain saturated fatty acids, which are said to be harmful for health.
Oils containing unsaturated fatty acids should be chosen for cooking.

Saturated hydrocarbons are fairly unreactive and are inert in the presence of most reagents. However, in the presence of sunlight, chlorine is added to hydrocarbons in a very fast reaction. Chlorine can replace the hydrogen atoms one by one. This is called a substitution reaction, because one type of atom or a group of atoms takes the place of another. A number of products are usually formed with the higher homologues of alkanes.

Many carbon compounds are invaluable to us. But here we shall study the properties of two commercially important compounds: ethanol and ethanoic acid.

Ethanoic acid is commonly called acetic acid and belongs to a group of acids called carboxylic acids. A 5–8% solution of acetic acid in water is called vinegar and is used widely as a preservative in pickles. The melting point of pure ethanoic acid is 290 K, and hence it often freezes during winter in cold climates. This gave rise to its name, glacial acetic acid.

The group of organic compounds called carboxylic acids are obviously characterised by their acidic nature. However, unlike mineral acids like HCl, which are completely ionised, carboxylic acids are weak acids.

Reactions of ethanoic acid (the equations, which did not survive in this extract, are reconstructed after this list):
(i) Esterification reaction: Esters are most commonly formed by the reaction of an acid and an alcohol. Ethanoic acid reacts with absolute ethanol in the presence of an acid catalyst to give an ester. Generally, esters are sweet-smelling substances. They are used in making perfumes and as flavouring agents. On treating with sodium hydroxide, which is an alkali, the ester is converted back to alcohol and the sodium salt of the carboxylic acid. This reaction is known as saponification, because it is used in the preparation of soap. Soaps are sodium or potassium salts of long-chain carboxylic acids.
(ii) Reaction with a base: Like mineral acids, ethanoic acid reacts with a base such as sodium hydroxide to give a salt (sodium ethanoate, commonly called sodium acetate) and water.
(iii) Reaction with carbonates and hydrogencarbonates: Ethanoic acid reacts with carbonates and hydrogencarbonates to give a salt, carbon dioxide and water. The salt produced is commonly called sodium acetate.
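The equations for reactions (i) to (iii) were lost in extraction; the following is a reconstruction of the usual textbook forms in standard notation, taking sodium hydroxide, sodium carbonate and sodium hydrogencarbonate as the base and the carbonates:

\[
\begin{aligned}
&\text{(i)}\quad \mathrm{CH_3COOH + C_2H_5OH \xrightarrow{\text{acid}} CH_3COOC_2H_5 + H_2O}\\
&\text{(ii)}\quad \mathrm{CH_3COOH + NaOH \longrightarrow CH_3COONa + H_2O}\\
&\text{(iii)}\quad \mathrm{2\,CH_3COOH + Na_2CO_3 \longrightarrow 2\,CH_3COONa + H_2O + CO_2}\\
&\phantom{\text{(iii)}\quad} \mathrm{CH_3COOH + NaHCO_3 \longrightarrow CH_3COONa + H_2O + CO_2}
\end{aligned}
\]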
This activity demonstrates the effect of soap in cleaning. Most dirt is oily in nature and, as you know, oil does not dissolve in water. The molecules of soap are sodium or potassium salts of long-chain carboxylic acids. The ionic end of soap interacts with water, while the carbon chain interacts with oil. The soap molecules thus form structures called micelles (see Fig. 4.12), where one end of each molecule is towards the oil droplet while the ionic end faces outside. This forms an emulsion in water. The soap micelle thus helps in pulling out the dirt in water, and we can wash our clothes clean (Fig. 4.13).

Have you ever observed while bathing that foam is formed with difficulty and an insoluble substance (scum) remains after washing with water? This is caused by the reaction of soap with the calcium and magnesium salts which cause the hardness of water. Hence you need to use a larger amount of soap. This problem is overcome by using another class of compounds, called detergents, as cleansing agents. Detergents are generally sodium salts of sulphonic acids or ammonium salts with chloride or bromide ions, etc. Both have long hydrocarbon chains. The charged ends of these compounds do not form insoluble precipitates with the calcium and magnesium ions in hard water. Thus, they remain effective in hard water. Detergents are usually used to make shampoos and products for cleaning clothes.
Even after the rejection of Newlands' Law of Octaves, many scientists continued to search for a pattern that correlated the properties of elements with their atomic masses.

The main credit for classifying elements goes to Dmitri Ivanovich Mendeléev, a Russian chemist. He was the most important contributor to the early development of a Periodic Table of elements, wherein the elements were arranged on the basis of their fundamental property, the atomic mass, and also on the similarity of chemical properties.

When Mendeléev started his work, 63 elements were known. He examined the relationship between the atomic masses of the elements and their physical and chemical properties. Among chemical properties, Mendeléev concentrated on the compounds formed by elements with oxygen and hydrogen. He selected hydrogen and oxygen as they are very reactive and form compounds with most elements. The formulae of the hydrides and oxides formed by an element were treated as one of the basic properties of an element for its classification. He then took 63 cards and on each card he wrote down the properties of one element. He sorted out the elements with similar properties and pinned the cards together on a wall. He observed that most of the elements got a place in a Periodic Table and were arranged in the order of their increasing atomic masses. It was also observed that there occurs a periodic recurrence of elements with similar physical and chemical properties.
On this basis, Mendeléev formulated a Periodic Law, which states that 'the properties of elements are a periodic function of their atomic masses'.

While developing the Periodic Table, there were a few instances where Mendeléev had to place an element with a slightly greater atomic mass before an element with a slightly lower atomic mass. The sequence was inverted so that elements with similar properties could be grouped together. For example, cobalt (atomic mass 58.9) appeared before nickel (atomic mass 58.7). Looking at Table 5.4, can you find one more such anomaly?

Further, Mendeléev left some gaps in his Periodic Table. Instead of looking upon these gaps as defects, Mendeléev boldly predicted the existence of some elements that had not been discovered at that time. Mendeléev named them by prefixing a Sanskrit numeral, Eka (one), to the name of the preceding element in the same group. For instance, scandium, gallium and germanium, discovered later, have properties similar to Eka-boron, Eka-aluminium and Eka-silicon, respectively. The properties of Eka-aluminium predicted by Mendeléev and those of gallium, which was discovered later and took the place of Eka-aluminium, are listed as follows.

This provided convincing evidence for both the correctness and usefulness of Mendeléev's Periodic Table. Further, it was the extraordinary success of Mendeléev's predictions that led chemists not only to accept his Periodic Table but also to recognise him as the originator of the concept on which it is based. Noble gases like helium (He), neon (Ne) and argon (Ar) have been mentioned in many a context before this. These gases were discovered very late because they are very inert and present in extremely low concentrations in our atmosphere. One of the strengths of Mendeléev's Periodic Table was that, when these gases were discovered, they could be placed in a new group without disturbing the existing order.

The electronic configuration of hydrogen resembles that of the alkali metals. Like alkali metals, hydrogen combines with halogens, oxygen and sulphur to form compounds having similar formulae, as shown in the examples here. On the other hand, just like halogens, hydrogen also exists as diatomic molecules, and it combines with metals and non-metals to form covalent compounds.

Certainly, no fixed position can be given to hydrogen in the Periodic Table. This was the first limitation of Mendeléev's Periodic Table: he could not assign a correct position to hydrogen in his Table. Isotopes were discovered long after Mendeléev had proposed his periodic classification of elements. Let us recall that isotopes of an element have similar chemical properties but different atomic masses.

Thus, isotopes of all elements posed a challenge to Mendeléev's Periodic Law. Another problem was that the atomic masses do not increase in a regular manner in going from one element to the next. So it was not possible to predict how many elements could be discovered between two elements, especially when we consider the heavier elements.

In 1913, Henry Moseley showed that the atomic number (symbolised as Z) of an element is a more fundamental property than its atomic mass.
Accordingly, Mendeléev's Periodic Law was modified and the atomic number was adopted as the basis of the Modern Periodic Table. The Modern Periodic Law can be stated as follows: 'Properties of elements are a periodic function of their atomic number.' Let us recall that the atomic number gives us the number of protons in the nucleus of an atom, and this number increases by one in going from one element to the next. Elements, when arranged in order of increasing atomic number, lead us to the classification known as the Modern Periodic Table (Table 5.6). Prediction of the properties of elements could be made with more precision when elements were arranged on the basis of increasing atomic number.

As we can see, the Modern Periodic Table takes care of three limitations of Mendeléev's Periodic Table. The anomalous position of hydrogen can be discussed after we see the bases on which the position of an element in the Modern Periodic Table depends.

The Modern Periodic Table has 18 vertical columns known as 'groups' and 7 horizontal rows known as 'periods'. Let us see what decides the placing of an element in a certain group and period.

You will find that all these elements contain the same number of valence electrons. Similarly, you will find that the elements present in any one group have the same number of valence electrons. For example, fluorine (F) and chlorine (Cl) belong to group 17; how many electrons do fluorine and chlorine have in their outermost shells? Hence, we can say that groups in the Periodic Table signify an identical outer-shell electronic configuration. On the other hand, the number of shells increases as we go down the group.
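This bookkeeping of shells and valence electrons can be sketched in a few lines of code. The snippet below is an illustration, not part of the chapter, and it uses the school-level filling rule of 2, 8, 8 and 2 electrons for the K, L, M and N shells, which holds only for elements up to calcium (Z = 20):

def shells(z):
    # Fill K, L, M, N shells with capacities 2, 8, 8, 2: the school-level
    # rule, valid only for elements up to calcium (Z = 20).
    config = []
    for capacity in (2, 8, 8, 2):
        take = min(z, capacity)
        if take == 0:
            break
        config.append(take)
        z -= take
    return config

for name, z in [("F", 9), ("Na", 11), ("Cl", 17)]:
    c = shells(z)
    print(name, c, "period:", len(c), "valence electrons:", c[-1])
# F [2, 7] period 2; Na [2, 8, 1] period 3; Cl [2, 8, 7] period 3.
# F and Cl both have 7 valence electrons, which is why both sit in group 17,
# while the extra occupied shell in Cl places it one period lower.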
You will find that these elements of the second period do not have the same number of valence electrons, but they contain the same number of shells. You also observe that the number of valence shell electrons increases by one unit as the atomic number increases by one unit on moving from left to right in a period. Or, we can say that atoms of different elements with the same number of occupied shells are placed in the same period. Na, Mg, Al, Si, P, S, Cl and Ar belong to the third period of the Modern Periodic Table, since the electrons in the atoms of these elements are filled in the K, L and M shells.

How many elements are there in the first, second, third and fourth periods?
The position of an element in the Periodic Table tells us about its chemical reactivity. As you have learnt, the valence electrons determine the kind and number of bonds formed by an element. Can you now say why Mendeléev's choice of formulae of compounds as the basis for deciding the position of an element in his Table was a good one? How would this lead to elements with similar chemical properties being placed in the same group?

We have discussed nutrition in organisms in the last section. The food material taken in during the process of nutrition is used in cells to provide energy for various life processes. Diverse organisms do this in different ways: some use oxygen to break down glucose completely into carbon dioxide and water, while some use other pathways that do not involve oxygen (Fig. 6.8). In all cases, the first step is the break-down of glucose, a six-carbon molecule, into a three-carbon molecule called pyruvate. This process takes place in the cytoplasm. Further, the pyruvate may be converted into ethanol and carbon dioxide. This process takes place in yeast during fermentation. Since this process takes place in the absence of air (oxygen), it is called anaerobic respiration. Break-down of pyruvate using oxygen takes place in the mitochondria. This process breaks up the three-carbon pyruvate molecule to give three molecules of carbon dioxide; the other product is water. Since this process takes place in the presence of air (oxygen), it is called aerobic respiration. The release of energy in this aerobic process is a lot greater than in the anaerobic process. Sometimes, when there is a lack of oxygen in our muscle cells, another pathway for the break-down of pyruvate is taken. Here the pyruvate is converted into lactic acid, which is also a three-carbon molecule. This build-up of lactic acid in our muscles during sudden activity causes cramps.

The energy released during cellular respiration is immediately used to synthesise a molecule called ATP, which is used to fuel all other activities in the cell. In these processes, ATP is broken down, giving rise to a fixed amount of energy which can drive the endothermic reactions taking place in the cell.
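The three fates of pyruvate described above branch on the availability of oxygen and on the organism; a schematic sketch of that branching (purely illustrative, not a chemical model) is:

```python
# Schematic of the break-down pathways described in the text.
def breakdown_products(oxygen_available, in_yeast=False):
    """Return the products of pyruvate break-down under the given conditions."""
    if oxygen_available:
        # Aerobic respiration, in the mitochondria; releases much more energy.
        return ["carbon dioxide", "water"]
    if in_yeast:
        # Fermentation (anaerobic respiration) in yeast.
        return ["ethanol", "carbon dioxide"]
    # Oxygen-starved muscle cells: lactic acid builds up, causing cramps.
    return ["lactic acid"]

print(breakdown_products(oxygen_available=True))
print(breakdown_products(oxygen_available=False, in_yeast=True))
print(breakdown_products(oxygen_available=False))
```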
Since the aerobic respiration pathway depends on oxygen, aerobic organisms need to ensure that there is sufficient intake of oxygen. We have seen that plants exchange gases through stomata, and the large intercellular spaces ensure that all cells are in contact with air. Carbon dioxide and oxygen are exchanged by diffusion here. They can go into cells, or away from them and out into the air. The direction of diffusion depends upon the environmental conditions and the requirements of the plant. At night, when there is no photosynthesis occurring, CO2 elimination is the major exchange activity going on. During the day, CO2 generated during respiration is used up for photosynthesis, hence there is no CO2 release. Instead, oxygen release is the major event at this time.

Animals have evolved different organs for the uptake of oxygen from the environment and for getting rid of the carbon dioxide produced. Terrestrial animals can breathe the oxygen in the atmosphere, but animals that live in water need to use the oxygen dissolved in water. Since the amount of dissolved oxygen is fairly low compared to the amount of oxygen in the air, the rate of breathing in aquatic organisms is much faster than that seen in terrestrial organisms. Fishes take in water through their mouths and force it past the gills, where the dissolved oxygen is taken up by the blood.

Terrestrial organisms use the oxygen in the atmosphere for respiration. This oxygen is absorbed by different organs in different animals. All these organs have a structure that increases the surface area in contact with the oxygen-rich atmosphere. Since the exchange of oxygen and carbon dioxide has to take place across this surface, the surface is very fine and delicate. In order to protect it, it is usually placed within the body, so there have to be passages that will take air to this area, and a mechanism for moving the air in and out of the area where the oxygen is absorbed.

In human beings (Fig. 6.9), air is taken into the body through the nostrils. The air passing through the nostrils is filtered by fine hairs that line the passage; the passage is also lined with mucus, which helps in this process. From here, the air passes through the throat and into the lungs. Rings of cartilage are present in the throat. These ensure that the air passage does not collapse.

Within the lungs, the passage divides into smaller and smaller tubes which finally terminate in balloon-like structures called alveoli (singular: alveolus). The alveoli provide a surface where the exchange of gases can take place. The walls of the alveoli contain an extensive network of blood vessels. As we have seen in earlier years, when we breathe in, we lift our ribs and flatten our diaphragm, and the chest cavity becomes larger as a result. Because of this, air is sucked into the lungs and fills the expanded alveoli. The blood brings carbon dioxide from the rest of the body for release into the alveoli, and the oxygen in the alveolar air is taken up by blood in the alveolar blood vessels to be transported to all the cells in the body. During the breathing cycle, when air is taken in and let out, the lungs always contain a residual volume of air so that there is sufficient time for oxygen to be absorbed and for the carbon dioxide to be released.

When the body size of animals is large, diffusion alone cannot take care of oxygen delivery to all parts of the body. Instead, respiratory pigments take up oxygen from the air in the lungs and carry it to tissues which are deficient in oxygen before releasing it. In human beings, the respiratory pigment is haemoglobin, which has a very high affinity for oxygen. This pigment is present in the red blood corpuscles. Carbon dioxide is more soluble in water than oxygen is, and hence is mostly transported in the dissolved form in our blood.

In animals, control and coordination are provided by nervous and muscular tissues, which we have studied in Class IX. Touching a hot object is an urgent and dangerous situation for us. We need to detect it, and respond to it. How do we detect that we are touching a hot object?
All information from our environment is detected by the specialised tips of some nerve cells. These receptors are usually located in our sense organs, such as the inner ear, the nose, the tongue, and so on. So gustatory receptors will detect taste, while olfactory receptors will detect smell.

This information, acquired at the end of the dendritic tip of a nerve cell [Fig. 7.1 (a)], sets off a chemical reaction that creates an electrical impulse. This impulse travels from the dendrite to the cell body, and then along the axon to its end. At the end of the axon, the electrical impulse sets off the release of some chemicals. These chemicals cross the gap, or synapse, and start a similar electrical impulse in a dendrite of the next neuron. This is a general scheme of how nervous impulses travel in the body. A similar synapse finally allows delivery of such impulses from neurons to other cells, such as muscle cells or glands [Fig. 7.1 (b)]. It is thus no surprise that nervous tissue is made up of an organised network of nerve cells or neurons, and is specialised for conducting information via electrical impulses from one part of the body to another. Look at Fig. 7.1 (a) and identify the parts of a neuron (i) where information is acquired, (ii) through which information travels as an electrical impulse, and (iii) where this impulse must be converted into a chemical signal for onward transmission.
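A minimal sketch of this relay, with the steps taken directly from the description above (the function and its printout are our own illustration):

```python
# The path of a nervous impulse as described in the text: electrical along
# the neuron, chemical across the synapse.
def relay_impulse(stimulus):
    steps = [
        f"dendritic tip: '{stimulus}' sets off a chemical reaction",
        "an electrical impulse travels: dendrite -> cell body -> along the axon",
        "axon end: the impulse triggers the release of chemicals",
        "synapse: the chemicals cross the gap to the next neuron",
        "next neuron: a similar electrical impulse starts in its dendrite",
    ]
    for step in steps:
        print(step)

relay_impulse("touching a hot object")
```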
'Reflex' is a word we use very commonly when we talk about some sudden action in response to something in the environment. We say 'I jumped out of the way of the bus reflexly', or 'I pulled my hand back from the flame reflexly', or 'I was so hungry my mouth started watering reflexly'. What exactly do we mean? A common idea in all such examples is that we do something without thinking about it, or without feeling in control of our reactions. Yet these are situations where we are responding with some action to changes in our environment. How is control and coordination achieved in such situations?

Let us consider this further. Take one of our examples. Touching a flame is an urgent and dangerous situation for us, or in fact, for any animal! How would we respond to this? One seemingly simple way is to think consciously about the pain and the possibility of getting burnt, and therefore move our hand. An important question then is, how long will it take us to think all this? The answer depends on how we think. If nerve impulses are sent around the way we have talked about earlier, then thinking is also likely to involve the creation of such impulses. Thinking is a complex activity, so it is bound to involve a complicated interaction of many nerve impulses from many neurons.

If this is the case, it is no surprise that the thinking tissue in our body consists of dense networks of intricately arranged neurons. It sits in the forward end of the skull, and receives signals from all over the body, which it thinks about before responding to them. Obviously, in order to receive these signals, this thinking part of the brain in the skull must be connected to nerves coming from various parts of the body. Similarly, if this part of the brain is to instruct muscles to move, nerves must carry this signal back to different parts of the body. If all of this is to be done when we touch a hot object, it may take enough time for us to get burnt!

How does the design of the body solve this problem? Rather than having to think about the sensation of heat, if the nerves that detect heat were connected to the nerves that move muscles in a simpler way, the process of detecting the signal (the input) and responding to it by an output action might be completed quickly. Such a connection is commonly called a reflex arc (Fig. 7.2). Where should such reflex arc connections be made between the input nerve and the output nerve? The best place, of course, would be the point where they first meet each other. Nerves from all over the body meet in a bundle in the spinal cord on their way to the brain. Reflex arcs are formed in this spinal cord itself, although the information input also goes on to reach the brain. Of course, reflex arcs have evolved in animals because the thinking process of the brain is not fast enough. In fact, many animals have very little or none of the complex neuron network needed for thinking. So it is quite likely that reflex arcs have evolved as efficient ways of functioning in the absence of true thought processes. However, even after complex neuron networks have come into existence, reflex arcs continue to be more efficient for quick responses.

Is reflex action the only function of the spinal cord? Obviously not, since we know that we are thinking beings. The spinal cord is made up of nerves which supply the information to think about. Thinking involves more complex mechanisms and neural connections. These are concentrated in the brain, which is the main coordinating centre of the body. The brain and spinal cord constitute the central nervous system. They receive information from all parts of the body and integrate it.

We also think about our actions. Writing, talking, moving a chair and clapping at the end of a programme are examples of voluntary actions which are based on deciding what to do next. So, the brain also has to send messages to muscles. This is the second way in which the nervous system communicates with the muscles. The communication between the central nervous system and the other parts of the body is facilitated by the peripheral nervous system, consisting of cranial nerves arising from the brain and spinal nerves arising from the spinal cord. The brain thus allows us to think and take actions based on that thinking. As you will expect, this is accomplished through a complex design, with different parts of the brain responsible for integrating different inputs and outputs. The brain has three such major parts or regions, namely the fore-brain, mid-brain and hind-brain.
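The routing logic of the reflex arc can be sketched schematically (our own illustration; the step descriptions paraphrase the contrast the text draws):

```python
# Contrast between the reflex arc and the slower 'think first' route.
def reflex_arc(stimulus):
    return [
        f"receptor detects: {stimulus}",
        "sensory (input) nerve carries the signal to the spinal cord",
        "spinal cord: input nerve meets output nerve -- the reflex arc",
        "motor (output) nerve instructs the muscle: pull the hand away",
        "the input signal still travels on to the brain, a moment later",
    ]

def thought_out_response(stimulus):
    return [
        f"receptor detects: {stimulus}",
        "signal travels all the way to the brain",
        "brain interprets the sensation and decides on an action",
        "brain sends instructions back down to the muscles",
    ]

for step in reflex_arc("heat from a flame"):
    print(step)
print("versus the slower route:")
for step in thought_out_response("heat from a flame"):
    print(step)
```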
The fore-brain is the main thinking part of the brain. It has regions which receive sensory impulses from various receptors. Separate areas of the fore-brain are specialised for hearing, smell, sight and so on. There are separate areas of association where this sensory information is interpreted by putting it together with information from other receptors, as well as with information that is already stored in the brain. Based on all this, a decision is made about how to respond, and the information is passed on to the motor areas which control the movement of voluntary muscles, for example, our leg muscles. However, certain sensations are distinct from seeing or hearing; for example, how do we know that we have eaten enough? The sensation of feeling full is because of a centre associated with hunger, which is in a separate part of the fore-brain.

Let us look at the other use of the word 'reflex' that we have talked about in the introduction. Our mouth waters when we see food we like, without our meaning to. Our hearts beat without our thinking about it. In fact, we cannot control these actions easily by thinking about them even if we wanted to. Do we have to think about or remember to breathe or digest food? So, in between the simple reflex actions like change in the size of the pupil, and the thought-out actions such as moving a chair, there is another set of muscle movements over which we do not have any thinking control. Many of these involuntary actions are controlled by the mid-brain and hind-brain. All these involuntary actions, including blood pressure, salivation and vomiting, are controlled by the medulla in the hind-brain.

Think about activities like walking in a straight line, riding a bicycle or picking up a pencil. These are possible due to a part of the hind-brain called the cerebellum. It is responsible for the precision of voluntary actions and for maintaining the posture and balance of the body. Imagine what would happen if each of these actions failed whenever we were not consciously thinking about it.
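Gathering the division of labour described above into one summary (a textbook-level sketch, not an exhaustive map of brain function):

```python
# Brain regions and the roles assigned to them in the text.
brain_regions = {
    "fore-brain": [
        "main thinking part; receives sensory impulses (hearing, smell, sight, ...)",
        "association areas interpret sensory information with stored information",
        "motor areas control voluntary muscles, e.g. leg muscles",
        "a separate hunger centre gives the sensation of feeling full",
    ],
    "mid-brain": [
        "controls many involuntary actions (together with the hind-brain)",
    ],
    "hind-brain": [
        "medulla: involuntary actions such as blood pressure, salivation, vomiting",
        "cerebellum: precision of voluntary actions, posture and balance",
    ],
}

for region, roles in brain_regions.items():
    print(region)
    for role in roles:
        print("  -", role)
```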
Animals have a nervous system for controlling and coordinating the activities of the body. But plants have neither a nervous system nor muscles. So, how do they respond to stimuli? When we touch the leaves of a chhui-mui (the 'touch-me-not' plant of the Mimosa family), they begin to fold up and droop. When a seed germinates, the root goes down and the stem comes up into the air.

The folding up of the leaves of the sensitive plant involves no growth. On the other hand, the directional movement of a seedling is caused by growth; if it is prevented from growing, it will not show any movement. So plants show two different types of movement: one dependent on growth and the other independent of growth.

Let us think about the first kind of movement, such as that of the sensitive plant. Since no growth is involved, the plant must actually move its leaves in response to touch. But there is no nervous tissue, nor any muscle tissue. How does the plant detect the touch, and how do the leaves move in response?

If we think about where exactly the plant is touched, and what part of the plant actually moves, it is apparent that the movement happens at a point different from the point of touch. So, information that a touch has occurred must be communicated. Plants also use electrical-chemical means to convey this information from cell to cell, but unlike in animals, there is no specialised tissue in plants for the conduction of information. Finally, again as in animals, some cells must change shape in order for movement to happen. Instead of the specialised proteins found in animal muscle cells, plant cells change shape by changing the amount of water in them, resulting in swelling or shrinking, and therefore in changing shape.

Some plants, like the pea plant, climb up other plants or fences by means of tendrils. These tendrils are sensitive to touch. When they come in contact with any support, the part of the tendril in contact with the object does not grow as rapidly as the part of the tendril away from the object. This causes the tendril to circle around the object and thus cling to it. More commonly, plants respond to stimuli slowly, by growing in a particular direction. Because this growth is directional, it appears as if the plant is moving. Let us understand this type of movement with the help of an example.

Environmental triggers such as light or gravity will change the direction in which plant parts grow. These directional, or tropic, movements can be either towards the stimulus or away from it. So, in the two different kinds of phototropic movement, shoots respond by bending towards light while roots respond by bending away from it. How does this help the plant?

Plants show tropism in response to other stimuli as well. The roots of a plant always grow downwards, while the shoots usually grow upwards and away from the earth. This upward and downward growth of shoots and roots, respectively, in response to the pull of the earth, or gravity, is, obviously, geotropism.
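The tropic responses named so far can be collected into a small lookup (only the cases stated in the text; anything else is outside this chapter):

```python
# Directional (tropic) growth responses as described in the text.
tropic_responses = {
    ("shoot", "light"):   "bends towards the light (phototropism)",
    ("root",  "light"):   "bends away from the light (phototropism)",
    ("shoot", "gravity"): "grows upwards, away from the earth (geotropism)",
    ("root",  "gravity"): "grows downwards, towards the earth (geotropism)",
}

def tropic_response(plant_part, stimulus):
    return tropic_responses.get((plant_part, stimulus), "not covered in this chapter")

print("Shoot in one-sided light:", tropic_response("shoot", "light"))
print("Root under gravity:     ", tropic_response("root", "gravity"))
```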
Let us now once again think about how information is communicated in the bodies of multicellular organisms. The movement of the sensitive plant in response to touch is very quick. The movement of sunflowers in response to day or night, on the other hand, is quite slow, and growth-related movement of plants is slower still. Even in animal bodies, there are carefully controlled directions to growth: our arms and fingers grow in certain directions, not haphazardly. So controlled movements can be either slow or fast. If fast responses to stimuli are to be made, information transfer must happen very quickly, and for this the medium of transmission must be able to move rapidly.

Electrical impulses are an excellent means for this. But there are limitations to the use of electrical impulses. Firstly, they will reach only those cells that are connected by nervous tissue, not each and every cell in the animal body. Secondly, once an electrical impulse is generated in a cell and transmitted, the cell will take some time to reset its mechanisms before it can generate and transmit a new impulse. In other words, cells cannot continually create and transmit electrical impulses. It is thus no wonder that most multicellular organisms use another means of communication between cells, namely chemical communication.

If, instead of generating an electrical impulse, stimulated cells release a chemical compound, this compound would diffuse all around the original cell. If other cells around have the means to detect this compound using special molecules on their surfaces, then they would be able to recognise the information, and even transmit it. This will be slower, of course, but it can potentially reach all cells of the body, regardless of nervous connections, and it can be done steadily and persistently. These compounds, or hormones, used by multicellular organisms for control and coordination show a great deal of diversity, as we would expect. Different plant hormones help to coordinate growth, development and responses to the environment. They are synthesised at places away from where they act and simply diffuse to the area of action.

Let us take an example that we have worked with earlier. When growing plants detect light, a hormone called auxin, synthesised at the shoot tip, helps the cells to grow longer. When light is coming from one side of the plant, auxin diffuses towards the shady side of the shoot. This concentration of auxin stimulates the cells to grow longer on the side of the shoot which is away from light. Thus, the plant appears to bend towards light.

Other examples of plant hormones are the gibberellins, which, like auxins, help in the growth of the stem. Cytokinins promote cell division, and it is natural then that they are present in greater concentration in areas of rapid cell division, such as in fruits and seeds. These are examples of plant hormones that help in promoting growth. But plants also need signals to stop growing. Abscisic acid is one example of a hormone which inhibits growth; its effects include the wilting of leaves.
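The hormones named above, with the roles the text assigns them, can be summarised as follows (a sketch, not a complete list of plant hormones):

```python
# Plant hormones and their roles as given in the text.
plant_hormones = {
    "auxin":         "synthesised at the shoot tip; makes cells grow longer "
                     "(collects on the shady side, so the shoot bends towards light)",
    "gibberellins":  "help in the growth of the stem",
    "cytokinins":    "promote cell division; concentrated in fruits and seeds",
    "abscisic acid": "inhibits growth; effects include wilting of leaves",
}

for hormone, role in plant_hormones.items():
    print(f"{hormone}: {role}")
```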
How are such chemical, or hormonal, means of information transmission used in animals? What do some animals, for instance squirrels, experience when they are in a scary situation? Their bodies have to prepare for either fighting or running away. Both are very complicated activities that will use a great deal of energy in controlled ways. Many different tissue types will be used, and their activities integrated together, in these actions. However, the two alternative activities, fighting or running, are also quite different! So here is a situation in which some common preparations can be usefully made in the body. These preparations should ideally make it easier to do either activity in the near future. How would this be achieved?

If the body design in the squirrel relied only on electrical impulses via nerve cells, the range of tissues instructed to prepare for the coming activity would be limited. On the other hand, if a chemical signal were to be sent as well, it would reach all cells of the body and provide the wide-ranging changes needed. This is done in many animals, including human beings, using a hormone called adrenaline, which is secreted from the adrenal glands. Look at Fig. 7.7 to locate these glands.

Adrenaline is secreted directly into the blood and carried to different parts of the body. The target organs, or the specific tissues on which it acts, include the heart. As a result, the heart beats faster, resulting in the supply of more oxygen to our muscles. The blood supply to the digestive system and skin is reduced due to the contraction of muscles around small arteries in these organs. This diverts the blood to our skeletal muscles. The breathing rate also increases because of the contractions of the diaphragm and the rib muscles. All these responses together enable the animal body to be ready to deal with the situation. Such animal hormones are part of the endocrine system, which constitutes a second way of control and coordination in our body.
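Collecting the adrenaline responses just listed into one 'prepare for fight or flight' summary (our own illustrative sketch):

```python
# The wide-ranging effects of adrenaline as described in the text.
adrenaline_effects = [
    "heart beats faster -> more oxygen supplied to the muscles",
    "muscles around small arteries of the digestive system and skin contract "
    "-> blood is diverted to the skeletal muscles",
    "diaphragm and rib muscles contract more -> breathing rate increases",
]

def fight_or_flight():
    print("adrenaline secreted into the blood and carried to target organs:")
    for effect in adrenaline_effects:
        print(" -", effect)

fight_or_flight()
```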
Remember that plants have hormones that control their directional growth. What functions do animal hormones perform? On the face of it, we cannot imagine their role in directional growth; we have never seen an animal growing more in one direction or the other depending on light or gravity! But if we think about it a bit more, it will become evident that, even in animal bodies, growth happens in carefully controlled places. Plants will grow leaves in many places on the plant body, for example, but we do not grow fingers on our faces. The design of the body is carefully maintained even during the growth of children.

Let us examine some examples to understand how hormones help in coordinated growth. We have all seen salt packets which say 'iodised salt' or 'enriched with iodine'. Why is it important for us to have iodised salt in our diet? Iodine is essential for the thyroid gland to synthesise the hormone thyroxin, and thyroxin regulates carbohydrate, protein and fat metabolism in the body so as to provide the best balance for growth. If iodine is deficient in our diet, there is a possibility that we might suffer from goitre, one of whose symptoms is a swollen neck. Can you correlate this with the position of the thyroid gland in Fig. 7.7?

Sometimes we come across people who are either very short (dwarfs) or extremely tall (giants). Have you ever wondered how this happens? Growth hormone is one of the hormones secreted by the pituitary. As its name indicates, growth hormone regulates the growth and development of the body. A deficiency of this hormone in childhood leads to dwarfism. You must have noticed many dramatic changes in your appearance, as well as that of your friends, as you approached 10–12 years of age. These changes associated with puberty are because of the secretion of testosterone in males and oestrogen in females.

Do you know anyone in your family or friends who has been advised by the doctor to take less sugar in their diet because they are suffering from diabetes? As a treatment, they might be taking injections of insulin. This is a hormone produced by the pancreas which helps in regulating blood sugar levels. If it is not secreted in proper amounts, the sugar level in the blood rises, causing many harmful effects.

If it is so important that hormones should be secreted in precise quantities, we need a mechanism through which this is done. The timing and amount of hormone released are regulated by feedback mechanisms. For example, if the sugar levels in the blood rise, they are detected by the cells of the pancreas, which respond by producing more insulin. As the blood sugar level falls, insulin secretion is reduced.
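This negative-feedback loop can be written as a toy loop; the set point and the sugar readings below are made-up numbers purely for illustration:

```python
# A toy model of the insulin feedback mechanism described above.
def pancreas_response(blood_sugar, set_point=100.0):
    """More insulin when blood sugar is above the set point, less when below."""
    if blood_sugar > set_point:
        return "cells of the pancreas secrete more insulin -> blood sugar falls"
    return "insulin secretion is reduced"

for level in (140.0, 90.0):   # illustrative readings
    print(f"blood sugar {level:5.1f}: {pancreas_response(level)}")
```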
Concave mirrors are commonly used in torches, search-lights and vehicle headlights to get powerful parallel beams of light. They are often used as shaving mirrors to see a larger image of the face. Dentists use concave mirrors to see large images of the teeth of patients. Large concave mirrors are used to concentrate sunlight to produce heat in solar furnaces. We have studied image formation by a concave mirror; now we shall study the formation of images by a convex mirror.

We consider two positions of the object for studying the image formed by a convex mirror: first, when the object is at infinity, and second, when the object is at a finite distance from the mirror. The ray diagrams for the formation of the image by a convex mirror for these two positions of the object are shown in Fig. 10.8 (a) and (b), respectively. The results are summarised in Table 10.2.

You can see a full-length image of a tall building or tree in a small convex mirror. One such mirror is fitted in a wall of Agra Fort facing the Taj Mahal. If you visit Agra Fort, try to observe the full image of the Taj Mahal; to view it distinctly, you should stand at a suitable spot on the terrace adjoining the wall.

Convex mirrors are commonly used as rear-view (wing) mirrors in vehicles. These mirrors are fitted on the sides of the vehicle, enabling the driver to see the traffic behind and so facilitate safe driving. Convex mirrors are preferred because they always give an erect, though diminished, image. Also, they have a wider field of view, as they are curved outwards. Thus, convex mirrors enable the driver to view a much larger area than would be possible with a plane mirror.

While dealing with the reflection of light by spherical mirrors, we shall follow a set of sign conventions called the New Cartesian Sign Convention. In this convention, the pole (P) of the mirror is taken as the origin (Fig. 10.9). The principal axis of the mirror is taken as the x-axis (X'X) of the coordinate system. The conventions are as follows:
(i) The object is always placed to the left of the mirror.
This implies that the light from the object falls on the mirror from the left-hand side.
(ii) All distances parallel to the principal axis are measured from the pole of the mirror.
(iii) All distances measured to the right of the origin (along the + x-axis) are taken as positive, while those measured to the left of the origin (along the – x-axis) are taken as negative.
(iv) Distances measured perpendicular to and above the principal axis (along the + y-axis) are taken as positive.
(v) Distances measured perpendicular to and below the principal axis (along the – y-axis) are taken as negative.

The New Cartesian Sign Convention described above is illustrated in Fig. 10.9 for your reference. These sign conventions are applied to obtain the mirror formula and solve related numerical problems.

In a spherical mirror, the distance of the object from its pole is called the object distance (u). The distance of the image from the pole of the mirror is called the image distance (v). You already know that the distance of the principal focus from the pole is called the focal length (f). There is a relationship between these three quantities, given by the mirror formula, 1/v + 1/u = 1/f.

This formula is valid in all situations for all spherical mirrors and for all positions of the object. You must use the New Cartesian Sign Convention while substituting numerical values for u, v, f, and R in the mirror formula when solving problems.

Magnification produced by a spherical mirror gives the relative extent to which the image of an object is magnified with respect to the object size. It is expressed as the ratio of the height of the image to the height of the object, and is usually represented by the letter m.

You may note that the height of the object is taken to be positive, as the object is usually placed above the principal axis. The height of the image should be taken as positive for virtual images. However, it is to be taken as negative for real images. A negative sign in the value of the magnification indicates that the image is real. A positive sign indicates that the image is virtual.
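To see how the sign convention works in practice, here is a small illustrative calculation in Python (an addition, not part of the textbook). It applies the mirror formula 1/v + 1/u = 1/f together with the magnification relation m = –v/u; the numerical values are assumed for illustration.

```python
# Illustrative sketch: mirror formula 1/v + 1/u = 1/f and magnification
# m = -v/u, applied with the New Cartesian Sign Convention.

def image_distance(u, f):
    """Return image distance v from the mirror formula 1/v + 1/u = 1/f."""
    return 1.0 / (1.0 / f - 1.0 / u)

# Example: object 30 cm in front of a concave mirror of focal length 15 cm.
# Distances measured against the direction of incident light are negative.
u = -30.0   # object distance (cm), assumed
f = -15.0   # focal length of a concave mirror is negative (cm), assumed

v = image_distance(u, f)
m = -v / u  # magnification for mirrors

print(f"v = {v:.1f} cm")   # v = -30.0 cm: a real image, 30 cm in front
print(f"m = {m:.1f}")      # m = -1.0: real, inverted, same size as object
```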
The following are the laws of refraction of light.
(i) The incident ray, the refracted ray and the normal to the interface of two transparent media at the point of incidence all lie in the same plane.
(ii) The ratio of the sine of the angle of incidence to the sine of the angle of refraction is a constant, for light of a given colour and for the given pair of media. This law is also known as Snell's law of refraction. This constant value is called the refractive index of the second medium with respect to the first. Let us study the refractive index in some detail.

You have already studied that a ray of light that travels obliquely from one transparent medium into another will change its direction in the second medium. The extent of the change in direction that takes place in a given pair of media may be expressed in terms of the refractive index, the "constant" appearing on the right-hand side of Eq. (10.4).

The refractive index can be linked to an important physical quantity, the relative speed of propagation of light in different media. It turns out that light propagates with different speeds in different media. Light travels fastest in vacuum, with a speed of 3×10⁸ metres per second. In air, the speed of light is only marginally less than that in vacuum. It reduces considerably in glass or water. The value of the refractive index for a given pair of media depends upon the speed of light in the two media, as given below. If medium 1 is vacuum or air, then the refractive index of medium 2 is considered with respect to vacuum. This is called the absolute refractive index of the medium.

The absolute refractive index of a medium is simply called its refractive index. The refractive index of several media is given in Table 10.3. From the Table you can see that the refractive index of water is 1.33. This means that the ratio of the speed of light in air to the speed of light in water is 1.33. Similarly, the refractive index of crown glass is 1.52. Such data are helpful in many places. However, you need not memorise the data.

The ability of a medium to refract light is also expressed in terms of its optical density. Optical density has a definite connotation; it is not the same as mass density. We have been using the terms 'rarer medium' and 'denser medium' in this Chapter. These actually mean 'optically rarer medium' and 'optically denser medium', respectively. When can we say that a medium is optically denser than another? In comparing two media, the one with the larger refractive index is the optically denser medium; the other medium, of lower refractive index, is optically rarer. The speed of light is higher in a rarer medium than in a denser medium. Thus, a ray of light travelling from a rarer medium to a denser medium slows down and bends towards the normal. When it travels from a denser medium to a rarer medium, it speeds up and bends away from the normal. For example, water is an optically denser medium as compared to air.
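These relations can be tried out numerically. The short Python sketch below (an illustration added here, not from the textbook) uses the refractive index of water from Table 10.3 to find the speed of light in water from n = c/v, and the angle of refraction from Snell's law; the 45° angle of incidence is an assumed example.

```python
# Illustrative sketch: Snell's law, n = sin(i)/sin(r), and the
# refractive index as a ratio of speeds, n = c/v.
import math

c = 3e8          # speed of light in vacuum (m/s)
n_water = 1.33   # absolute refractive index of water (Table 10.3)

# Speed of light in water follows from n = c / v:
v_water = c / n_water
print(f"speed in water ≈ {v_water:.2e} m/s")   # ≈ 2.26e8 m/s

# A ray entering water from air at 45°: sin(i)/sin(r) = n
i = math.radians(45)                            # assumed angle of incidence
r = math.asin(math.sin(i) / n_water)
print(f"angle of refraction ≈ {math.degrees(r):.1f}°")  # ≈ 32.1°, bent towards the normal
```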
You might have seen watchmakers using a small magnifying glass to see tiny parts. Have you ever touched the surface of a magnifying glass with your hand? Is its surface plane or curved? Is it thicker in the middle or at the edges? The glasses used in spectacles and the one used by a watchmaker are examples of lenses. What is a lens? How does it bend light rays? We shall discuss these questions in this section.

A transparent material bound by two surfaces, of which one or both surfaces are spherical, forms a lens. This means that a lens is bound by at least one spherical surface; in such lenses, the other surface would be plane. A lens may have two spherical surfaces, bulging outwards. Such a lens is called a double convex lens, or simply a convex lens. It is thicker at the middle as compared to the edges. A convex lens converges light rays, as shown in Fig. 10.12 (a). Hence convex lenses are also called converging lenses. Similarly, a double concave lens is bounded by two spherical surfaces, curved inwards. It is thicker at the edges than at the middle. Such lenses diverge light rays, as shown in Fig. 10.12 (b), and are therefore also called diverging lenses. A double concave lens is simply called a concave lens.

A lens, whether convex or concave, has two spherical surfaces. Each of these surfaces forms a part of a sphere. The centres of these spheres are called the centres of curvature of the lens. The centre of curvature of a lens is usually represented by the letter C. Since there are two centres of curvature, we may represent them as C1 and C2. An imaginary straight line passing through the two centres of curvature of a lens is called its principal axis. The central point of a lens is its optical centre. It is usually represented by the letter O. A ray of light through the optical centre of a lens passes without suffering any deviation. The effective diameter of the circular outline of a spherical lens is called its aperture. We shall confine our discussion in this Chapter to lenses whose aperture is much less than the radius of curvature and whose two centres of curvature are equidistant from the optical centre O. Such lenses are called thin lenses with small apertures.

The twinkling of a star is due to atmospheric refraction of starlight. The starlight, on entering the earth's atmosphere, undergoes refraction continuously before it reaches the earth. The atmospheric refraction occurs in a medium of gradually changing refractive index. Since the atmosphere bends starlight towards the normal, the apparent position of the star is slightly different from its actual position. The star appears slightly higher (above) than its actual position when viewed near the horizon (Fig. 11.9). Further, this apparent position of the star is not stationary, but keeps on changing slightly, since the physical conditions of the earth's atmosphere are not stationary. Since the stars are very distant, they approximate point-sized sources of light. As the path of rays of light coming from the star goes on varying slightly, the apparent position of the star fluctuates and the amount of starlight entering the eye flickers – the star sometimes appears brighter and at some other time fainter, which is the twinkling effect. Why don't the planets twinkle? The planets are much closer to the earth, and are thus seen as extended sources. If we consider a planet as a collection of a large number of point-sized sources of light, the total variation in the amount of light entering our eye from all the individual point-sized sources will average out to zero, thereby nullifying the twinkling effect.

The Sun is visible to us about 2 minutes before the actual sunrise, and about 2 minutes after the actual sunset, because of atmospheric refraction. By actual sunrise, we mean the actual crossing of the horizon by the Sun. Fig. 11.10 shows the actual and apparent positions of the Sun with respect to the horizon. The time difference between actual sunset and apparent sunset is about 2 minutes. The apparent flattening of the Sun's disc at sunrise and sunset is also due to the same phenomenon.

The interplay of light with objects around us gives rise to several spectacular phenomena in nature. The blue colour of the sky, the colour of water in the deep sea, and the reddening of the sun at sunrise and sunset are some of the wonderful phenomena we are familiar with. In the previous class, you have learnt about the scattering of light by colloidal particles. The path of a beam of light passing through a true solution is not visible. However, its path becomes visible through a colloidal solution, where the size of the particles is relatively larger.

The earth's atmosphere is a heterogeneous mixture of minute particles. These particles include smoke, tiny water droplets, suspended particles of dust and molecules of air. When a beam of light strikes such fine particles, the path of the beam becomes visible. The light reaches us after being reflected diffusely by these particles.
The phenomenon of scattering of light by colloidal particles gives rise to the Tyndall effect, which you have studied in Class IX. This phenomenon is seen when a fine beam of sunlight enters a smoke-filled room through a small hole. Thus, scattering of light makes the particles visible. The Tyndall effect can also be observed when sunlight passes through the canopy of a dense forest. Here, tiny water droplets in the mist scatter the light. The colour of the scattered light depends on the size of the scattering particles. Very fine particles scatter mainly blue light, while particles of larger size scatter light of longer wavelengths. If the size of the scattering particles is large enough, the scattered light may even appear white.

The molecules of air and other fine particles in the atmosphere have sizes smaller than the wavelength of visible light. These are more effective in scattering light of shorter wavelengths at the blue end than light of longer wavelengths at the red end. Red light has a wavelength about 1.8 times greater than that of blue light. Thus, when sunlight passes through the atmosphere, the fine particles in air scatter the blue colour (shorter wavelengths) more strongly than red. The scattered blue light enters our eyes. If the earth had no atmosphere, there would not have been any scattering, and the sky would have looked dark. The sky appears dark to passengers flying at very high altitudes, as scattering is not prominent at such heights. You might have observed that 'danger' signal lights are red in colour. Do you know why? Red is scattered the least by fog or smoke, and can therefore be seen in the same colour at a distance.

Have you seen the sky and the Sun at sunset or sunrise? Have you wondered why the Sun and the surrounding sky appear red? Let us do an activity to understand the blue colour of the sky and the reddish appearance of the Sun at sunrise or sunset.

You will find fine microscopic sulphur particles precipitating in about 2 to 3 minutes. As the sulphur particles begin to form, you can observe the blue light from three sides of the glass tank. This is due to the scattering of short wavelengths by the minute colloidal sulphur particles. Observe the colour of the transmitted light from the fourth side of the glass tank, facing the circular hole. It is interesting to observe at first an orange-red colour and then a bright crimson-red colour on the screen.

This activity demonstrates the scattering of light, which helps you understand the bluish colour of the sky and the reddish appearance of the Sun at sunrise or sunset. Light from the Sun near the horizon passes through thicker layers of air and a larger distance in the earth's atmosphere before reaching our eyes. Light from the Sun overhead, however, travels a relatively shorter distance. At noon, the Sun appears white, as only a little of the blue and violet colours are scattered. Near the horizon, most of the blue light and shorter wavelengths are scattered away by the particles. Therefore, the light that reaches our eyes is of longer wavelengths. This gives rise to the reddish appearance of the Sun.
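The chapter does not quote it, but the strength of scattering by particles much smaller than the wavelength of light follows Rayleigh's law, varying as 1/λ⁴. The quick illustrative calculation below (an addition to the text, with an assumed blue wavelength) shows why the factor of 1.8 in wavelength matters so much.

```python
# Illustrative sketch: Rayleigh scattering intensity varies as 1/λ^4
# for particles much smaller than the wavelength of light.
lambda_blue = 450e-9            # typical blue wavelength (m); assumed value
lambda_red = 1.8 * lambda_blue  # red is ~1.8 times longer, as the text notes

ratio = (lambda_red / lambda_blue) ** 4
print(f"blue light is scattered ≈ {ratio:.1f}x more strongly than red")
# ≈ 10.5x, which is why the sky is blue and the setting Sun appears red
```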
How does a metal conduct electricity? You would think that a low-energy electron would have great difficulty passing through a solid conductor. Inside the solid, the atoms are packed together with very little spacing between them. But it turns out that electrons are able to 'travel' through a perfect solid crystal smoothly and easily, almost as if they were in a vacuum. The 'motion' of electrons in a conductor, however, is very different from that of charges in empty space. When a steady current flows through a conductor, the electrons in it move with a certain average 'drift speed'. One can calculate this drift speed for a typical copper wire carrying a small current, and it turns out to be very small, of the order of 1 mm per second. How is it, then, that an electric bulb lights up as soon as we turn the switch on? It cannot be that a current starts only when an electron from one terminal of the electric supply physically reaches the other terminal through the bulb, because the physical drift of electrons in the conducting wires is a very slow process.
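To get a feel for how slow this drift really is, here is a rough, illustrative estimate in Python (not from the textbook). It uses the standard relation I = nAe·v_d, where n is the free-electron density; the current, wire diameter and value of n are typical assumed figures.

```python
# Illustrative estimate of electron drift speed in a copper wire,
# using I = n * A * e * v_d. All numerical values are typical assumptions.
import math

I = 1.0       # current (A), assumed
d = 1e-3      # wire diameter: 1 mm, assumed
A = math.pi * (d / 2) ** 2   # cross-sectional area (m^2)
n = 8.5e28    # free electrons per m^3 in copper (standard tabulated value)
e = 1.6e-19   # electron charge (C)

v_d = I / (n * A * e)
print(f"drift speed ≈ {v_d * 1000:.3f} mm/s")
# ≈ 0.094 mm/s: a fraction of a millimetre per second, consistent
# with the order of magnitude quoted in the text above
```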
The exact mechanism of the current flow, which takes place at a speed close to the speed of light, is fascinating, but it is beyond the scope of this book. Do you feel like probing this question at an advanced level?

Is there a relationship between the potential difference across a conductor and the current through it? In this Activity, you will find that approximately the same value for V/I is obtained in each case. Thus the V–I graph is a straight line that passes through the origin, as shown in Fig. 12.3, and V/I is a constant ratio. In 1827, a German physicist, Georg Simon Ohm (1787–1854), found the relationship between the current I flowing in a metallic wire and the potential difference across its terminals: the potential difference, V, across the ends of a given metallic wire in an electric circuit is directly proportional to the current flowing through it, provided its temperature remains the same.

In Eq. (12.4), R is a constant for the given metallic wire at a given temperature and is called its resistance. Resistance is the property of a conductor to resist the flow of charges through it. Its SI unit is the ohm, represented by the Greek letter omega (Ω). If the potential difference across the two ends of a conductor is 1 V and the current through it is 1 A, then the resistance R of the conductor is 1 ohm.

It is obvious from Eq. (12.7) that the current through a resistor is inversely proportional to its resistance. If the resistance is doubled, the current is halved. In many practical cases it is necessary to increase or decrease the current in an electric circuit. A component used to regulate current without changing the voltage source is called a variable resistance. In an electric circuit, a device called a rheostat is often used to change the resistance in the circuit.

In this Activity we observe that the current is different for different components. Why do they differ? Certain components offer an easy path for the flow of electric current, while others resist it. We know that the motion of electrons in an electric circuit constitutes an electric current. The electrons, however, are not completely free to move within a conductor. They are restrained by the attraction of the atoms among which they move. Thus, the motion of electrons through a conductor is retarded by its resistance. A component of a given size that offers a low resistance is a good conductor. A conductor having some appreciable resistance is called a resistor. A component of identical size that offers a higher resistance is a poor conductor. An insulator of the same size offers even higher resistance.

It is observed that the ammeter reading decreases to one-half when the length of the wire is doubled. The ammeter reading increases when a thicker wire of the same material and length is used in the circuit. A change in the ammeter reading is also observed when a wire of a different material, but of the same length and area of cross-section, is used. On applying Ohm's law, we observe that the resistance of the conductor depends (i) on its length, (ii) on its area of cross-section, and (iii) on the nature of its material. Precise measurements have shown that the resistance of a uniform metallic conductor is directly proportional to its length (l) and inversely proportional to its area of cross-section (A).
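These two proportionalities are combined as R = ρl/A, where the constant of proportionality ρ is called the resistivity of the material. The Python sketch below is an illustrative calculation (not part of the textbook); the wire length and diameter are assumed values, and the resistivity of copper is a standard tabulated figure.

```python
# Illustrative sketch: combining the two proportionalities into
# R = rho * l / A, then applying Ohm's law, V = I * R.
import math

rho = 1.68e-8   # resistivity of copper (ohm·m), standard tabulated value
l = 10.0        # wire length (m), assumed
d = 0.5e-3      # wire diameter (m), assumed
A = math.pi * (d / 2) ** 2

R = rho * l / A
print(f"R ≈ {R:.2f} Ω")   # ≈ 0.86 Ω

V = 2.0          # applied potential difference (V), assumed
I = V / R        # Ohm's law
print(f"I ≈ {I:.1f} A")   # ≈ 2.3 A; doubling l would double R and halve I
```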
In preceding sections, we learnt about some simple electric circuits. We have seen how the current through a conductor depends upon its resistance and the potential difference across its ends. In various electrical gadgets, we often use resistors in various combinations. We now therefore intend to see how Ohm's law can be applied to combinations of resistors. There are two methods of joining resistors together. Figure 12.6 shows an electric circuit in which three resistors having resistances R1, R2 and R3, respectively, are joined end to end. Here the resistors are said to be connected in series.

What happens to the value of the current when a number of resistors are connected in series in a circuit? What would be their equivalent resistance? Let us try to understand these questions with the help of the following activities.

You will observe that the value of the current shown by the ammeter is the same, independent of its position in the electric circuit. It means that in a series combination of resistors, the current is the same in every part of the circuit; that is, the same current flows through each resistor.

You will also observe that the potential difference V is equal to the sum of the potential differences V1, V2, and V3. That is, the total potential difference across a combination of resistors in series is equal to the sum of the potential differences across the individual resistors.

In the electric circuit shown in Fig. 12.8, let I be the current through the circuit. The current through each resistor is also I. It is possible to replace the three resistors joined in series by an equivalent single resistor of resistance R, such that the potential difference V across it and the current I through the circuit remain the same.

Now, let us consider the arrangement of three resistors joined in parallel with a combination of cells (or a battery). We have seen that in a series circuit the current is constant throughout the electric circuit. Thus it is obviously impracticable to connect an electric bulb and an electric heater in series, because they need currents of widely different values to operate properly (see Example 12.3). Another major disadvantage of a series circuit is that when one component fails, the circuit is broken and none of the components works. If you have used 'fairy lights' to decorate buildings on festivals or marriage celebrations, you might have seen the electrician spending a lot of time locating and replacing the 'dead' bulb – each has to be tested to find which one has fused. On the other hand, a parallel circuit divides the current among the electrical gadgets. The total resistance in a parallel circuit is decreased, as per Eq. (12.18). This is helpful particularly when each gadget has a different resistance and requires a different current to operate properly.
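The two combination rules referred to above are R = R1 + R2 + R3 for a series connection, and 1/R = 1/R1 + 1/R2 + 1/R3 for a parallel one (Eq. 12.18). Here is a small illustrative computation (an addition to the text, with assumed resistor values):

```python
# Illustrative sketch: equivalent resistance of series and parallel
# combinations, using Rs = R1 + R2 + R3 and 1/Rp = 1/R1 + 1/R2 + 1/R3.
resistors = [5.0, 10.0, 30.0]   # assumed example values, in ohms

R_series = sum(resistors)
R_parallel = 1.0 / sum(1.0 / r for r in resistors)

print(f"series:   {R_series:.0f} Ω")    # 45 Ω: larger than any single resistor
print(f"parallel: {R_parallel:.0f} Ω")  # 3 Ω: smaller than the smallest resistor
```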
We know that a battery or a cell is a source of electrical energy. The chemical reaction within the cell generates the potential difference between its two terminals, which sets the electrons in motion and makes current flow through a resistor or a system of resistors connected to the battery. We have also seen, in Section 12.2, that to maintain the current, the source has to keep expending its energy. Where does this energy go? A part of the source energy may be consumed in doing useful work (as in rotating the blades of an electric fan). The rest of the source energy may be expended as heat, raising the temperature of the gadget. We often observe this in our everyday life. For example, an electric fan becomes warm if used continuously for a long time. On the other hand, if the electric circuit is purely resistive, that is, a configuration of resistors only, connected to a battery, the source energy continually gets dissipated entirely in the form of heat. This is known as the heating effect of electric current. This effect is utilised in devices such as the electric heater, the electric iron, etc.

Consider a current I flowing through a resistor of resistance R. Let the potential difference across it be V (Fig. 12.13). Let t be the time during which a charge Q flows across. The work done in moving the charge Q through a potential difference V is VQ. Therefore, the source must supply energy equal to VQ in time t.

The energy supplied to the circuit by the source in time t is therefore P × t, that is, VIt. What happens to this energy expended by the source? It gets dissipated in the resistor as heat.

This is known as Joule's law of heating. The law implies that the heat produced in a resistor is (i) directly proportional to the square of the current for a given resistance, (ii) directly proportional to the resistance for a given current, and (iii) directly proportional to the time for which the current flows through the resistor. In practical situations, when an electric appliance is connected to a known voltage source, Eq. (12.21) is used after calculating the current through it, using the relation I = V/R.
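In symbols, Joule's law is H = I²Rt. A short illustrative calculation (not from the textbook; the voltage, resistance and time are assumed values) for an appliance on a known voltage source:

```python
# Illustrative sketch of Joule's law of heating, H = I^2 * R * t,
# for an appliance connected to a known voltage source.
V = 220.0   # supply voltage (V), assumed
R = 100.0   # resistance of the heating element (Ω), assumed
t = 60.0    # time (s)

I = V / R              # first find the current, I = V/R
H = I ** 2 * R * t     # heat produced, in joules

print(f"I = {I:.1f} A")         # 2.2 A
print(f"H = {H/1000:.1f} kJ")   # ≈ 29.0 kJ dissipated as heat in one minute
```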
In ancient times, wood was the most common source of heat energy. The energy of flowing water and wind was also used for limited activities. Can you think of some of these uses? The exploitation of coal as a source of energy made the Industrial Revolution possible. Increasing industrialisation has led to a better quality of life all over the world. It has also caused the global demand for energy to grow at a tremendous rate.

The growing demand for energy was largely met by the fossil fuels – coal and petroleum. Our technologies were also developed for using these energy sources. But these fuels were formed millions of years ago, and there are only limited reserves. Fossil fuels are non-renewable sources of energy, so we need to conserve them. If we were to continue consuming these sources at such alarming rates, we would soon run out of energy! In order to avoid this, alternative sources of energy were explored. But we continue to be largely dependent on fossil fuels for most of our energy requirements (Fig. 14.1).

Burning fossil fuels has other disadvantages too. We have learnt about the air pollution caused by the burning of coal or petroleum products. The oxides of carbon, nitrogen and sulphur that are released on burning fossil fuels are acidic oxides. These lead to acid rain, which affects our water and soil resources. In addition to the problem of air pollution, recall the greenhouse effect of gases like carbon dioxide.

The pollution caused by burning fossil fuels can be somewhat reduced by increasing the efficiency of the combustion process and by using various techniques to reduce the escape of harmful gases and ashes into the surroundings. Besides being used directly for various purposes – in gas stoves and vehicles – did you know that fossil fuels are the major fuels used for generating electricity? Let us produce some electricity at our own small plant in the class and see what goes into producing our favourite form of energy.

This is our turbine for generating electricity. The simplest turbines have one moving part, a rotor-blade assembly. The moving fluid acts on the blades to spin them and impart energy to the rotor. Thus, we see that we basically need to move the fan – the rotor blade – with speed, which turns the shaft of the dynamo and converts mechanical energy into electrical energy, the form of energy that has become a necessity in today's scenario. The various ways in which this can be done depend upon the availability of resources. We will see how various sources of energy can be harnessed to run the turbine and generate electricity in the following sections.

Large amounts of fossil fuels are burnt every day in power stations to heat water to produce steam, which in turn runs turbines to generate electricity. The transmission of electricity is more efficient than transporting coal or petroleum over the same distance. Therefore, many thermal power plants are set up near coal or oil fields. The term thermal power plant is used since fuel is burnt to produce heat energy, which is converted into electrical energy.

Due to geological changes, molten rocks formed in the deeper hot regions of the earth's crust are pushed upward and trapped in certain regions called 'hot spots'. When underground water comes in contact with a hot spot, steam is generated. Sometimes hot water from such a region finds outlets at the surface; such outlets are known as hot springs. The steam trapped in rocks is routed through a pipe to a turbine and used to generate electricity. The cost of production is not high, but there are very few commercially viable sites where such energy can be exploited. A number of power plants based on geothermal energy are operational in New Zealand and the United States of America.

How is nuclear energy generated? In a process called nuclear fission, the nucleus of a heavy atom (such as uranium, plutonium or thorium), when bombarded with low-energy neutrons, can be split apart into lighter nuclei. A tremendous amount of energy is released in this process, because the mass of the original nucleus is a little more than the sum of the masses of the individual products. The fission of an atom of uranium, for example, produces 10 million times the energy produced by the combustion of an atom of carbon from coal. In a nuclear reactor designed for electric power generation, such nuclear 'fuel' can be part of a self-sustaining fission chain reaction that releases energy at a controlled rate. The released energy can be used to produce steam and further generate electricity.

The major hazard of nuclear power generation is the storage and disposal of spent or used fuel – the uranium still decaying into harmful subatomic particles (radiation). Improper nuclear-waste storage and disposal result in environmental contamination. Further, there is a risk of accidental leakage of nuclear radiation. The high cost of installation of a nuclear power plant, the high risk of environmental contamination and the limited availability of uranium make large-scale use of nuclear energy prohibitive. Nuclear energy was first used for destructive purposes before nuclear power stations were designed.
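The factor of 10 million quoted above can be checked with rough standard figures (assumed here for illustration, not given in the text): the fission of one uranium-235 nucleus releases roughly 200 MeV, while burning one carbon atom releases about 4 eV.

```python
# Illustrative order-of-magnitude check (assumed standard values):
# energy per uranium fission vs. energy per carbon atom burnt.
E_fission_MeV = 200.0   # ≈ energy released per U-235 fission (MeV), assumed
E_combustion_eV = 4.0   # ≈ energy released per C atom burnt (eV), assumed

ratio = (E_fission_MeV * 1e6) / E_combustion_eV
print(f"ratio ≈ {ratio:.0e}")
# ≈ 5e+07: tens of millions, matching the '10 million times'
# order of magnitude quoted in the text
```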
The fundamental physics of the fission chain reaction in a nuclear weapon is similar to the physics of a controlled nuclear reactor, but the two types of device are engineered quite differently.

Currently, all commercial nuclear reactors are based on nuclear fission. But there is another possibility of nuclear energy generation by a safer process called nuclear fusion. Fusion means joining lighter nuclei to make a heavier nucleus, most commonly hydrogen or hydrogen isotopes fusing to create helium, as in
²H + ²H → ³He + n
It releases a tremendous amount of energy, according to Einstein's equation, as the mass of the product is a little less than the sum of the masses of the original individual nuclei. Such nuclear fusion reactions are the source of energy in the Sun and other stars. It takes considerable energy to force nuclei to fuse. The conditions needed for this process are extreme – millions of degrees of temperature and millions of pascals of pressure. The hydrogen bomb is based on a thermonuclear fusion reaction. A nuclear bomb based on the fission of uranium or plutonium is placed at the core of the hydrogen bomb. This fission bomb is embedded in a substance which contains deuterium and lithium. When the fission bomb is detonated, the temperature of this substance is raised to 10⁷ K in a few microseconds. The high temperature provides sufficient energy for the light nuclei to fuse, and a devastating amount of energy is released.
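The energy released in such a reaction can be estimated from the mass defect via E = Δmc², or equivalently the conversion 1 u ≈ 931.5 MeV. The following illustrative calculation (an addition to the text) uses standard tabulated atomic masses, which are assumed values not quoted in the chapter:

```python
# Illustrative sketch: energy released in ²H + ²H → ³He + n,
# from the mass defect, using 1 u ≈ 931.5 MeV (standard values assumed).
m_deuterium = 2.014102   # mass of ²H in atomic mass units (u)
m_helium3 = 3.016029     # mass of ³He (u)
m_neutron = 1.008665     # mass of a free neutron (u)

delta_m = 2 * m_deuterium - (m_helium3 + m_neutron)
E_MeV = delta_m * 931.5  # convert mass defect to energy

print(f"Δm = {delta_m:.6f} u")
print(f"E ≈ {E_MeV:.2f} MeV")   # ≈ 3.27 MeV released per fusion event
```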
The green plants in a terrestrial ecosystem capture about 1% of the energy of sunlight that falls on their leaves and convert it into food energy. When green plants are eaten by primary consumers, a great deal of energy is lost as heat to the environment, some amount goes into digestion and in doing work, and the rest goes towards growth and reproduction. An average of 10% of the food eaten is turned into the consumer's own body and made available for the next level of consumers. Therefore, 10% can be taken as the average value for the amount of organic matter that is present at each step and reaches the next level of consumers. Since so little energy is available for the next level of consumers, food chains generally consist of only three or four steps. The loss of energy at each step is so great that very little usable energy remains after four trophic levels. There are generally a greater number of individuals at the lower trophic levels of an ecosystem; the greatest number is of the producers. The length and complexity of food chains vary greatly. Each organism is generally eaten by two or more other kinds of organisms, which in turn are eaten by several other organisms. So instead of a straight-line food chain, the relationship can be shown as a series of branching lines called a food web (Fig. 15.3).

From the energy flow diagram (Fig. 15.4), two things become clear. Firstly, the flow of energy is unidirectional. The energy that is captured by the autotrophs does not revert back to the solar input, and the energy which passes to the herbivores does not come back to the autotrophs. As it moves progressively through the various trophic levels, it is no longer available to the previous level. Secondly, the energy available at each trophic level gets diminished progressively due to the loss of energy at each level.
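The roughly 10% transfer at each step can be made concrete with a small illustrative calculation (an addition to the text; the starting energy figure is arbitrary):

```python
# Illustrative sketch of the ~10% rule of energy flow across trophic levels.
sunlight = 10000.0         # energy falling on plants, arbitrary units
energy = 0.01 * sunlight   # ~1% captured by the producers

levels = ["producers", "herbivores", "primary carnivores", "secondary carnivores"]
for level in levels:
    print(f"{level:20s} {energy:8.2f}")
    energy *= 0.10         # ~10% passes on to the next trophic level
# After three or four transfers almost no usable energy remains,
# which is why food chains rarely extend beyond four steps.
```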
Another interesting aspect of food chains is how some harmful chemicals unknowingly enter our bodies through them. You have read in Class IX how water gets polluted. One of the reasons is the use of several pesticides and other chemicals to protect our crops from diseases and pests. These chemicals are washed down into the soil or into water bodies. From the soil, they are absorbed by plants along with water and minerals, and from water bodies they are taken up by aquatic plants and animals. This is one of the ways in which they enter the food chain.

As these chemicals are not degradable, they accumulate progressively at each trophic level. Since human beings occupy the top level in any food chain, the maximum concentration of these chemicals accumulates in our bodies. This phenomenon is known as biological magnification. This is the reason why our food grains such as wheat and rice, vegetables and fruits, and even meat contain varying amounts of pesticide residues. They cannot always be removed by washing or other means.
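Biological magnification can be pictured with a toy calculation (illustrative only; the tenfold concentration factor per step and the starting concentration are assumed numbers, not data from the text):

```python
# Toy illustration of biological magnification: a non-degradable
# chemical concentrating up a food chain (x10 per step is an assumption).
concentration = 0.001   # pesticide in water, parts per million (assumed)

for level in ["water", "aquatic plants", "small fish", "large fish", "humans"]:
    print(f"{level:15s} {concentration:10.3f} ppm")
    concentration *= 10  # accumulates at each trophic level
# The organism at the top of the chain carries a far higher
# concentration than the water the chemical started in.
```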
We are an integral part of the environment. Changes in the environment affect us, and our activities change the environment around us. We have already seen in Class IX how our activities pollute the environment. In this chapter, we shall look at two environmental problems in detail: the depletion of the ozone layer and waste disposal.

Ozone is a molecule formed of three atoms of oxygen. While oxygen is essential for all aerobic forms of life, ozone is a deadly poison. However, at the higher levels of the atmosphere, ozone performs an essential function: it shields the surface of the earth from ultraviolet (UV) radiation from the Sun. This radiation is highly damaging to organisms; for example, it is known to cause skin cancer in human beings. Ozone at the higher levels of the atmosphere is a product of UV radiation acting on oxygen molecules. The higher-energy UV radiation splits apart some molecular oxygen (O₂) into free oxygen atoms, and these atoms then combine with molecular oxygen to form ozone (O₃).

The amount of ozone in the atmosphere began to drop sharply in the 1980s. This decrease has been linked to synthetic chemicals like chlorofluorocarbons (CFCs), which are used as refrigerants and in fire extinguishers. In 1987, the United Nations Environment Programme (UNEP) succeeded in forging an agreement to freeze CFC production at 1986 levels. It is now mandatory for all manufacturing companies throughout the world to make CFC-free refrigerators.

In our daily activities, we generate a lot of materials that are thrown away. What are some of these waste materials? What happens after we throw them away? Let us perform an activity to find answers to these questions.

We have seen in the chapter on 'Life Processes' that the food we eat is digested by various enzymes in our body. Have you ever wondered why the same enzyme does not break down everything we eat? Enzymes are specific in their action; specific enzymes are needed for the breakdown of a particular substance. That is why we will not get any energy if we try to eat coal! Because of this, many human-made materials like plastics will not be broken down by the action of bacteria or other saprophytes. These materials will be acted upon by physical processes like heat and pressure, but under the ambient conditions found in our environment, they persist for a long time.

Substances that are broken down by biological processes are said to be biodegradable. How many of the substances you buried were biodegradable? Substances that are not broken down in this manner are said to be non-biodegradable. These substances may be inert and simply persist in the environment for a long time, or they may harm the various members of the ecosystem.
We are an integral part of the environment: changes in the environment affect us, and our activities change the environment around us. We have already seen in Class IX how our activities pollute the environment. In this chapter, we shall look at two environmental problems in detail, namely depletion of the ozone layer and waste disposal.

Ozone is a molecule formed by three atoms of oxygen. While oxygen is essential for all aerobic forms of life, ozone is a deadly poison. However, at the higher levels of the atmosphere, ozone performs an essential function: it shields the surface of the earth from ultraviolet (UV) radiation coming from the Sun. This radiation is highly damaging to organisms; for example, it is known to cause skin cancer in human beings. Ozone at the higher levels of the atmosphere is a product of UV radiation acting on oxygen molecules. The higher-energy UV radiation splits some molecular oxygen apart into free oxygen atoms, and these atoms then combine with molecular oxygen to form ozone.
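The formation of ozone described above can be summarised in two simple equations (a schematic representation of the process, with UV denoting the high-energy ultraviolet radiation):

\[
\text{O}_2 \xrightarrow{\ \text{UV}\ } \text{O} + \text{O} \qquad\qquad \text{O} + \text{O}_2 \rightarrow \text{O}_3
\]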
The amount of ozone in the atmosphere began to drop sharply in the 1980s. This decrease has been linked to synthetic chemicals like chlorofluorocarbons (CFCs), which are used as refrigerants and in fire extinguishers. In 1987, the United Nations Environment Programme (UNEP) succeeded in forging an agreement to freeze CFC production at 1986 levels. It is now mandatory for manufacturing companies throughout the world to make CFC-free refrigerators.
In our daily activities, we generate a lot of material that is thrown away. What are some of these waste materials? What happens after we throw them away? Let us perform an activity to find answers to these questions.

We have seen in the chapter on 'Life Processes' that the food we eat is digested by various enzymes in our body. Have you ever wondered why the same enzyme does not break down everything we eat? Enzymes are specific in their action; specific enzymes are needed for the breakdown of a particular substance. That is why we will not get any energy if we try to eat coal! Because of this, many human-made materials like plastics will not be broken down by the action of bacteria or other saprophytes. These materials are acted upon by physical processes like heat and pressure, but under the ambient conditions found in our environment, they persist for a long time.
Substances that are broken down by biological processes are said to be biodegradable. How many of the substances you buried were biodegradable? Substances that are not broken down in this manner are said to be non-biodegradable. These substances may be inert and simply persist in the environment for a long time, or they may harm the various members of the ecosystem.
Awareness about the problems caused by unthinkingly exploiting our resources has been a fairly recent phenomenon in our society, and once this awareness rises, some action is usually taken. You must have heard about the Ganga Action Plan. This multi-crore project came about in 1985 because the quality of the water in the Ganga was very poor. Coliform is a group of bacteria, found in human intestines, whose presence in water indicates contamination by disease-causing microorganisms.

As you can see, there are some measurable factors which are used to quantify pollution or the quality of the water that we use for various activities. Some of the pollutants are harmful even when present in very small quantities, and we require sophisticated equipment to measure them. But, as we learnt in Chapter 2, the pH of water is something that can easily be checked using a universal indicator.

We need not feel powerless or overwhelmed by the scale of the problems, because there are many things we can do to make a difference. You must have come across the five R's to save the environment: Refuse, Reduce, Reuse, Repurpose and Recycle.
What do they refer to?

Refuse: This means to say No to things people offer you that you don't need. Refuse to buy products that can harm you and the environment; say No to single-use plastic carry bags.

Reduce: This means that you use less. You save electricity by switching off unnecessary lights and fans, and you save water by repairing leaky taps. Do not waste food. Can you think of other things whose usage you can reduce?

Reuse: This is actually even better than recycling, because the process of recycling uses some energy. In the 'reuse' strategy, you simply use things again and again. Instead of throwing away used envelopes, you can reverse them and use them again. The plastic bottles in which you buy various food items like jam or pickle can be used for storing things in the kitchen. What other items can we reuse?

Not just roads and buildings, but all the things we use or consume – food, clothes, books, toys, furniture, tools and vehicles – are obtained from resources on this earth. The only thing we get from outside is energy, which we receive from the Sun. Even this energy is processed by living organisms and various physical and chemical processes on the earth before we make use of it.

Why do we need to use our resources carefully? Because these are not unlimited, and with the human population increasing at a tremendous rate due to improvement in health care, the demand for all resources is increasing at an exponential rate. The management of natural resources requires a long-term perspective so that these will last for the generations to come and will not merely be exploited to the hilt for short-term gains. This management should also ensure equitable distribution of resources so that all, and not just a handful of rich and powerful people, benefit from the development of these resources.

Another factor to be considered while we exploit these natural resources is the damage we cause to the environment as they are extracted or used. For example, mining causes pollution because of the large amount of slag which is discarded for every tonne of metal extracted. Hence, sustainable natural resource management demands that we plan for the safe disposal of these wastes too.

The present-day global concerns for sustainable development and conservation of natural resources are of recent origin as compared to the long tradition and culture of nature conservation in our country. Principles of conservation and sustainable management were well established in prehistoric India. Our ancient literature is full of examples where the values and sensitivity of humans towards nature were glorified and the principle of sustainability was established at its best.

During the Vedic period, both the productive and the protective aspects of forest vegetation were emphasised. Agriculture emerged as a dominant economic activity during the later Vedic period. This was the time when concepts of the cultural landscape, such as sacred forests and groves, sacred corridors and a variety of ethno-forestry practices, evolved and continued into the post-Vedic period; a wide range of ethno-forestry practices were infused with traditions, customs and rituals, and followed as a means for the protection of nature and natural resources.

Conservation, against the background of the rapid decline in wildlife populations and forestry, has become essential.
But why do we need to conserve our forests and wildlife? Conservation preserves the ecological diversity and our life support systems – water, air and soil. It also preserves the genetic diversity of plants and animals for better growth of species and breeding. In agriculture, for example, we are still dependent on traditional crop varieties, and fisheries too are heavily dependent on the maintenance of aquatic biodiversity.

In the 1960s and 1970s, conservationists demanded a national wildlife protection programme. The Indian Wildlife (Protection) Act was implemented in 1972, with various provisions for protecting habitats. An all-India list of protected species was also published. The thrust of the programme was towards protecting the remaining population of certain endangered species by banning hunting, giving legal protection to their habitats, and restricting trade in wildlife. Subsequently, central and many state governments established national parks and wildlife sanctuaries, about which you have already studied. The central government also announced several projects for protecting specific animals which were gravely threatened, including the tiger, the one-horned rhinoceros, the Kashmir stag or hangul, three types of crocodiles – the freshwater crocodile, the saltwater crocodile and the gharial – and the Asiatic lion, among others. Most recently, the Indian elephant, the blackbuck, the chinkara, the great Indian bustard (godawan) and the snow leopard have been given full or partial legal protection against hunting and trade throughout India.

The conservation projects are now focusing on biodiversity rather than on a few of its components. There is now a more intensive search for different conservation measures. Increasingly, even insects are beginning to find a place in conservation planning. In the notifications under the Wildlife Act of 1980 and 1986, several hundred butterflies, moths, beetles, and one dragonfly have been added to the list of protected species. In 1991, for the first time, plants were also added to the list, starting with six species.

Even if we want to conserve our vast forest and wildlife resources, it is rather difficult to manage, control and regulate them. In India, much of the forest and wildlife resources are either owned or managed by the government through the Forest Department or other government departments. These are classified under the following categories.
(i) Reserved Forests: More than half of the total forest land has been declared reserved forest. Reserved forests are regarded as the most valuable as far as the conservation of forest and wildlife resources is concerned.
(ii) Protected Forests: Almost one-third of the total forest area is protected forest, as declared by the Forest Department. This forest land is protected from any further depletion.
(iii) Unclassed Forests: These are other forests and wastelands belonging to both government and private individuals and communities.

Reserved and protected forests are also referred to as permanent forest estates, maintained for the purpose of producing timber and other forest produce, and for protective reasons. Madhya Pradesh has the largest area under permanent forests, constituting 75 per cent of its total forest area.
Jammu and Kashmir, Andhra Pradesh, Uttarakhand, Kerala, Tamil Nadu, West Bengal, and Maharashtra have large percentages of reserved forests in their total forest area, whereas Bihar, Haryana, Punjab, Himachal Pradesh, Odisha and Rajasthan have a bulk of it under protected forests. All north-eastern states and parts of Gujarat have a very high percentage of their forests as unclassed forests, managed by local communities.

Conservation strategies are not new in our country. We often ignore that in India, forests are also home to some of the traditional communities. In some areas of India, local communities are struggling to conserve these habitats along with government officials, recognising that only this will secure their own long-term livelihood. In Sariska Tiger Reserve, Rajasthan, villagers have fought against mining by citing the Wildlife Protection Act. In many areas, villagers themselves are protecting habitats and explicitly rejecting government involvement. The inhabitants of five villages in the Alwar district of Rajasthan have declared 1,200 hectares of forest as the Bhairodev Dakav 'Sonchuri', declaring their own set of rules and regulations which do not allow hunting, and are protecting the wildlife against any outside encroachments.

The famous Chipko movement in the Himalayas has not only successfully resisted deforestation in several areas but has also shown that community afforestation with indigenous species can be enormously successful. Attempts to revive traditional conservation methods or develop new methods of ecological farming are now widespread. Farmers and citizens' groups like the Beej Bachao Andolan in Tehri and Navdanya have shown that adequate levels of diversified crop production without the use of synthetic chemicals are possible and economically viable.

In India, the joint forest management (JFM) programme furnishes a good example of involving local communities in the management and restoration of degraded forests. The programme has been in formal existence since 1988, when the state of Odisha passed the first resolution for joint forest management. JFM depends on the formation of local (village) institutions that undertake protection activities, mostly on degraded forest land managed by the Forest Department. In return, the members of these communities are entitled to intermediary benefits like non-timber forest produce and a share in the timber harvested through 'successful protection'.

The clear lesson from the dynamics of both environmental destruction and reconstruction in India is that local communities everywhere have to be involved in some kind of natural resource management. But there is still a long way to go before local communities are at the centre-stage of decision-making. We should accept only those economic or developmental activities that are people-centric, environment-friendly and economically rewarding.

Biodiversity, or biological diversity, is immensely rich in wildlife and cultivated species, diverse in form and function but closely integrated in a system through multiple networks of interdependencies. If you look around, you will be able to find that there are some animals and plants which are unique to your area. In fact, India is one of the world's richest countries in terms of its vast array of biological diversity, and many more species possibly remain to be discovered.
You have already studied in detail the extent and variety of forest and wildlife resources in India, and you may have realised the importance of these resources in our daily life. These diverse flora and fauna are so well integrated into our daily life that we take them for granted. But, lately, they are under great stress, mainly due to our insensitivity to the environment.

Some estimates suggest that at least 10 per cent of India's recorded wild flora and 20 per cent of its mammals are on the threatened list. Many of these would now be categorised as 'critical', that is, on the verge of extinction, like the cheetah, the pink-headed duck, the mountain quail, the forest spotted owlet, and plants like Madhuca insignis (a wild variety of mahua) and Hubbardia heptaneuron (a species of grass). In fact, no one can say how many species may have already been lost. Today, we only talk of the larger and more visible animals and plants that have become extinct, but what about smaller animals like insects, and plants?

Large-scale development projects have also contributed significantly to the loss of forests. Since 1951, over 5,000 sq km of forest has been cleared for river valley projects. Clearing of forests is still continuing, with projects like the Narmada Sagar Project in Madhya Pradesh, which would inundate 40,000 hectares of forest. Mining is another important factor behind deforestation. The Buxa Tiger Reserve in West Bengal is seriously threatened by ongoing dolomite mining, which has disturbed the natural habitat of many species and blocked the migration route of several others, including the great Indian elephant.

Many foresters and environmentalists hold the view that the greatest degrading factors behind the depletion of forest resources are grazing and fuel-wood collection. Though there may be some substance in their argument, the fact remains that a substantial part of the fuel-fodder demand is met by lopping rather than by felling entire trees. The forest ecosystems are repositories of some of the country's most valuable forest products, minerals and other resources that meet the demands of the rapidly expanding industrial-urban economy. These protected areas thus mean different things to different people, and therein lies the fertile ground for conflicts.

Habitat destruction, hunting, poaching, over-exploitation, environmental pollution, poisoning and forest fires are factors which have led to the decline in India's biodiversity. Other important causes of environmental destruction are unequal access, inequitable consumption of resources and differential sharing of responsibility for environmental well-being. Over-population in third world countries is often cited as the cause of environmental degradation. However, an average American consumes 40 times more resources than an average Somalian. Similarly, the richest five per cent of Indian society probably cause more ecological damage, because of the amount they consume, than the poorest 25 per cent, while sharing minimum responsibility for environmental well-being. The question is: who is consuming what, from where and how much?

The destruction of forests and wildlife is not just a biological issue. The biological loss is strongly correlated with the loss of cultural diversity.
Such losses have increasingly marginalised and impoverished many indigenous and other forest-dependent communities, who directly depend on various components of the forest and wildlife for food, drink, medicine, culture, spirituality and so on. Within the poor, women are affected more than men. In many societies, women bear the major responsibility for the collection of fuel, fodder, water and other basic subsistence needs. As these resources are depleted, the drudgery of women increases, and sometimes they have to walk for more than 10 km to collect them. This causes serious health problems for women, and the neglect of home and children because of the increased hours of work often has serious social implications. The indirect impact of degradation, such as severe drought or deforestation-induced floods, also hits the poor the hardest. Poverty in these cases is a direct outcome of environmental destruction. Forests and wildlife are therefore vital to the quality of life and environment in the subcontinent, and it is imperative to adopt sound forest and wildlife conservation strategies.

Given the abundance and renewability of water, it is difficult to imagine that we may suffer from water scarcity. The moment we speak of water shortages, we immediately associate them with regions having low rainfall or those that are drought-prone. We instantaneously visualise the deserts of Rajasthan and women balancing many 'matkas' (earthen pots) used for collecting and storing water, travelling long distances to get water. True, the availability of water resources varies over space and time, mainly due to variations in seasonal and annual precipitation, but water scarcity in most cases is caused by over-exploitation, excessive use and unequal access to water among different social groups. Where, then, is water scarcity likely to occur? As you have read in the hydrological cycle, freshwater can be obtained directly from precipitation, surface run-off and groundwater.

Is it possible that an area or region may have ample water resources but still face water scarcity? Many of our cities are such examples. Thus, water scarcity may be an outcome of a large and growing population and consequent greater demands for water, and unequal access to it.
A large population requires more water not only for domestic use but also to produce more food. Hence, to facilitate higher food-grain production, water resources are being over-exploited to expand irrigated areas for dry-season agriculture. Irrigated agriculture is the largest consumer of water. There is now a need to revolutionise agriculture by developing drought-resistant crops and dry farming techniques. You may have seen in many television advertisements that most farmers have their own wells and tube-wells on their farms for irrigation to increase their produce. But have you ever wondered what this could result in? It may lead to falling groundwater levels, adversely affecting water availability and the food security of the people.

Post-independence India witnessed intensive industrialisation and urbanisation, creating vast opportunities for us. Today, large industrial houses are as commonplace as the industrial units of many multinational corporations (MNCs). The ever-increasing number of industries has made matters worse by exerting pressure on existing freshwater resources. Industries, apart from being heavy users of water, also require power to run them, and much of this energy comes from hydroelectric power. Today, hydroelectric power contributes approximately 22 per cent of the total electricity produced in India. Moreover, multiplying urban centres with large and dense populations and urban lifestyles have not only added to water and energy requirements but have further aggravated the problem. If you look into the housing societies or colonies in the cities, you will find that most of them have their own groundwater pumping devices to meet their water needs. Not surprisingly, we find that fragile water resources are being over-exploited, causing their depletion in several of these cities.

So far we have focused on the quantitative aspects of water scarcity. Now let us consider another situation, where water is sufficiently available to meet the needs of the people but the area still suffers from water scarcity. This scarcity may be due to the bad quality of water. Lately, there has been a growing concern that even if there is ample water to meet the needs of the people, much of it may be polluted by domestic and industrial wastes, chemicals, pesticides and fertilisers used in agriculture, thus making it hazardous for human use. The Government of India has accorded the highest priority to improving the quality of life and ease of living of people, especially those living in rural areas, by announcing the Jal Jeevan Mission (JJM). The goal of JJM is to enable every rural household to get an assured supply of potable piped water at a service level of 55 litres per capita per day, regularly and on a long-term basis, by ensuring the functionality of tap water connections.
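To get a rough sense of this service level (the household size here is an assumption for illustration, not a figure from the text), a rural household of five members would be assured 5 × 55 = 275 litres of potable piped water every day.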
You may have already realised that the need of the hour is to conserve and manage our water resources: to safeguard ourselves from health hazards, to ensure food security and the continuation of our livelihoods and productive activities, and also to prevent the degradation of our natural ecosystems. Over-exploitation and mismanagement of water resources will impoverish this resource and cause an ecological crisis that may have a profound impact on our lives.

But how do we conserve and manage water? Archaeological and historical records show that from ancient times we have been constructing sophisticated hydraulic structures like dams built of stone rubble, reservoirs or lakes, embankments and canals for irrigation. Not surprisingly, we have continued this tradition in modern India by building dams in most of our river basins.

What are dams, and how do they help us in conserving and managing water? Dams were traditionally built to impound rivers and rainwater that could be used later to irrigate agricultural fields. Today, dams are built not just for irrigation but also for electricity generation, water supply for domestic and industrial uses, flood control, recreation, inland navigation and fish breeding. Hence, dams are now referred to as multi-purpose projects, where the many uses of the impounded water are integrated with one another. For example, in the Satluj-Beas river basin, the water of the Bhakra-Nangal project is being used both for hydel power production and irrigation. Similarly, the Hirakud project in the Mahanadi basin integrates the conservation of water with flood control. Multi-purpose projects, launched after Independence with their integrated water resources management approach, were thought of as the vehicle that would lead the nation to development and progress, overcoming the handicap of its colonial past. Jawaharlal Nehru proudly proclaimed dams to be the 'temples of modern India', the reason being that they would integrate the development of agriculture and the village economy with rapid industrialisation and the growth of the urban economy.
In recent years, multi-purpose projects and large dams have come under great scrutiny and opposition for a variety of reasons. Regulating and damming rivers affects their natural flow, causing poor sediment flow and excessive sedimentation at the bottom of the reservoir, resulting in rockier stream beds and poorer habitats for the rivers' aquatic life. Dams also fragment rivers, making it difficult for aquatic fauna to migrate, especially for spawning. The reservoirs created on floodplains also submerge the existing vegetation and soil, leading to its decomposition over a period of time.

Multi-purpose projects and large dams have also been the cause of many new environmental movements like the 'Narmada Bachao Andolan' and the 'Tehri Dam Andolan'. Resistance to these projects has primarily been due to the large-scale displacement of local communities. Local people often had to give up their land, livelihood and their meagre access to and control over resources for the greater good of the nation. So, if the local people are not benefiting from such projects, then who is? Perhaps the landowners and large farmers, industrialists and a few urban centres.

Irrigation has also changed the cropping pattern of many regions, with farmers shifting to water-intensive and commercial crops. This has great ecological consequences, like the salinisation of the soil. At the same time, it has transformed the social landscape, increasing the social gap between the richer landowners and the landless poor. As we can see, dams did create conflicts between people wanting different uses and benefits from the same water resources. In Gujarat, the Sabarmati-basin farmers were agitated and almost caused a riot over the higher priority given to water supply in urban areas, particularly during droughts. Inter-state water disputes are also becoming common with regard to sharing the costs and benefits of multi-purpose projects.

Most of the objections to the projects arose due to their failure to achieve the purposes for which they were built. Ironically, the dams that were constructed to control floods have triggered floods due to sedimentation in the reservoir. Moreover, the big dams have mostly been unsuccessful in controlling floods at times of excessive rainfall.
You may have seen or read how the release of water from dams during heavy rains aggravated the flood situation in Maharashtra and Gujarat in 2006. The floods not only devastated life and property but also caused extensive soil erosion. Sedimentation also meant that the floodplains were deprived of silt, a natural fertiliser, further adding to the problem of land degradation. It was also observed that multi-purpose projects induced earthquakes, caused water-borne diseases and pests, and led to pollution resulting from the excessive use of water.