Question: if a new kind of slug were found which has a system | Movable type - Wikipedia
Movable type (US English; moveable type in British English) is the system and technology of printing and typography that uses movable components to reproduce the elements of a document (usually individual letters or punctuation marks), typically on the medium of paper.
The world's first movable-type printing technology for paper books was made of porcelain and was invented around AD 1040 in China during the Northern Song dynasty by the inventor Bi Sheng (990–1051). Subsequently, in 1377, the world's oldest extant book printed with movable metal type, the Jikji, was printed in Korea during the Goryeo dynasty. The diffusion of both movable-type systems was, to some degree, limited primarily to East Asia, although sporadic reports of movable-type technology were brought back to Europe by Christian missionaries, traders and businessmen returning after years of work in China, and these accounts influenced the development of printing technology in Europe. Some of these medieval European accounts are still preserved in library archives, among them those of the Vatican and Oxford University. Around 1450, Johannes Gutenberg introduced the metal movable-type printing press in Europe, along with innovations in casting the type based on a matrix and hand mould. The small number of alphabetic characters needed for European languages was an important factor. Gutenberg was the first to create his type pieces from an alloy of lead, tin, and antimony, and these materials remained standard for 550 years.
For alphabetic scripts, movable-type page setting was quicker than woodblock printing. The metal type pieces were more durable and the lettering more uniform, leading to typography and fonts. The high quality and relatively low price of the Gutenberg Bible (1455) established the superiority of movable type in Europe, and the use of printing presses spread rapidly. The printing press may be regarded as one of the key factors fostering the Renaissance, and due to its effectiveness its use spread around the globe.
The 19th-century invention of hot metal typesetting and its successors caused movable type to decline in the 20th century.
The technique of imprinting multiple copies of symbols or glyphs with a master type punch made of hard metal first developed around 3000 BC in ancient Sumer. These metal punch types can be seen as precursors of the letter punches adapted in later millennia to printing with movable metal type. Cylinder seals were used in Mesopotamia to create an impression on a surface by rolling the seal on wet clay. They were used to "sign" documents and mark objects as the owner's property. Cylinder seals were a related form of early typography, capable of printing small page designs in relief (cameo) on wax or clay; a miniature forerunner of rotogravure printing, they were used by wealthy individuals to seal and certify documents. By 650 BC the ancient Greeks were using larger-diameter punches to imprint small page images onto coins and tokens.
The designs of the artists who made the first coin punches were stylized with a degree of skill that could not be mistaken for common handiwork: salient and very specific types designed to be reproduced ad infinitum. Unlike the first typefaces used to print books in the 13th century, coin types were neither combined nor printed with ink on paper, but "published" in metal, a more durable medium, and so survived in substantial numbers. As the portable face of ruling authority, coins were a compact form of standardized knowledge issued in large editions, an early mass medium that stabilized trade and civilization throughout the Mediterranean world of antiquity.
Seals and stamps may have been precursors to movable type. The uneven spacing of the impressions on brick stamps found in the Mesopotamian cities of Uruk and Larsa, dating from the 2nd millennium BC, has been conjectured by some archaeologists as evidence that the stamps were made using movable type. The enigmatic Minoan Phaistos Disc of 1800–1600 BC has been considered by one scholar as an early example of a body of text being reproduced with reusable characters: it may have been produced by pressing pre-formed hieroglyphic "seals" into the soft clay. A few authors even view the disc as technically meeting all definitional criteria to represent an early incidence of movable-type printing. Recently it has been alleged by Jerome Eisenberg that the disc is a forgery.
The Prüfening dedicatory inscription is a medieval example of movable-type stamps being used.
Following the invention of paper in the 2nd century AD during the Chinese Han dynasty, writing materials became more portable and economical than the bones, shells, bamboo slips, metal or stone tablets, and silk previously used. Yet copying books by hand remained labour-intensive. Not until the Xiping Era (172–178 AD), towards the end of the Eastern Han dynasty, did seal printing and monotype appear. These were soon used for printing designs on fabrics, and later for printing texts.
Woodblock printing, invented by about the 8th century during the Tang dynasty, worked as follows. First, a neat hand-copied script was pasted face down on a relatively thick and smooth board; the paper was so thin as to be nearly transparent, so the characters showed in reverse but distinctly, with every stroke easily recognized. Carvers then cut away the parts of the board that were not part of a character, leaving the characters in relief, quite unlike those cut intaglio. To print, the raised characters were spread with ink and covered with paper; as workers' hands moved gently over the back of the paper, the characters were transferred onto it. Woodblock printing reached its heyday by the Song dynasty. Although it played an influential role in spreading culture, it had apparent drawbacks: carving a printing plate required considerable time, labour and materials; the plates were inconvenient to store; and mistakes were difficult to correct.
With woodblock printing, one printing plate could be used for hundreds of books, playing a significant role in spreading culture. Yet carving the plates was time- and labour-consuming, and large books cost years of effort. The plates needed a great deal of storage space and were often damaged by warping, worms and corrosion. If a book had a small print run and was never reprinted, its plates became nothing but waste; worse, if a mistake was found, it was difficult to correct without discarding the whole plate.
Bi Sheng (毕昇 / 畢昇) (990–1051) developed the first known movable-type system for printing in China around 1040 AD during the Northern Song dynasty, using ceramic materials. As described by the Chinese scholar Shen Kuo (沈括) (1031–1095):
When he wished to print, he took an iron frame and set it on the iron plate. In this he placed the types, set close together. When the frame was full, the whole made one solid block of type. He then placed it near the fire to warm it. When the paste (at the back) was slightly melted, he took a smooth board and pressed it over the surface, so that the block of type became as even as a whetstone.
For each character there were several types, and for certain common characters there were twenty or more types each, in order to be prepared for the repetition of characters on the same page. When the characters were not in use he had them arranged with paper labels, one label for each rhyme-group, and kept them in wooden cases.
If one were to print only two or three copies, this method would be neither simple nor easy. But for printing hundreds or thousands of copies, it was marvelously quick. As a rule he kept two forms going. While the impression was being made from the one form, the type was being put in place on the other. When the printing of the one form was finished, the other was then ready. In this way the two forms alternated and the printing was done with great rapidity.
In 1193, Zhou Bida, an officer of the Southern Song dynasty, made a set of clay movable types according to the method described by Shen Kuo in his Dream Pool Essays, and printed his book Notes of the Jade Hall (《玉堂雜記》).
The claim that Bi Sheng's clay types were "fragile", "not practical for large-scale printing" and "short-lived" has been refuted by facts and experiments. Bao Shicheng (1775–1855) wrote that baked clay movable type was "as hard and tough as horn"; experiments show that clay type, after being baked in an oven, becomes hard and difficult to break, remaining intact after being dropped from a height of two metres onto a marble floor. Clay movable types in China measured 1 to 2 centimetres, not 2 mm, and so were indeed as hard as horn.
There has been an ongoing debate regarding the success of ceramic printing technology, as no printed materials made with ceramic movable type have been found. However, historical records attest to its use in China from the Song dynasty through the Qing dynasty, as late as 1844.
Bi Sheng (990–1051) also pioneered the use of wooden movable type around 1040 AD, as described by the Chinese scholar Shen Kuo (1031–1095). However, this technology was abandoned in favour of clay movable types due to the presence of wood grain and the unevenness of the wooden type after being soaked in ink.
In 1298, Wang Zhen (王祯 / 王禎), a Yuan-dynasty government official of Jingde County, Anhui Province, China, re-invented a method of making movable wooden types. He made more than 30,000 wooden movable types and printed 100 copies of Records of Jingde County (《旌德縣志》), a book of more than 60,000 Chinese characters. Soon afterwards, he summarized his invention in his book A Method of Making Moveable Wooden Types for Printing Books. Although wooden type was more durable under the mechanical rigours of handling, repeated printing wore the character faces down, and the types could only be replaced by carving new pieces. The system was later enhanced by pressing wooden blocks into sand and casting metal types from the depressions in copper, bronze, iron or tin. This new method overcame many of the shortcomings of woodblock printing: rather than manually carving an individual block to print a single page, movable-type printing allowed the quick assembly of a page of text, and the new, more compact type fonts could be reused and stored. The set of wafer-like metal stamp types could be assembled to form pages, inked, and page impressions taken from rubbings on cloth or paper. In 1322, Ma Chengde (馬称德), a Fenghua county officer in Zhejiang, made 100,000 wooden movable types and printed the 43-volume Daxue Yanyi (《大學衍義》). Wooden movable types were used continually in China; even as late as 1733, the 2,300-volume Wuying Palace Collected Gems Edition (《武英殿聚珍版叢書》) was printed with 253,500 wooden movable types on the order of the Yongzheng Emperor, and completed in one year.
A number of books printed in the Tangut script during the Western Xia (1038–1227) period are known, of which the Auspicious Tantra of All-Reaching Union, discovered in the ruins of the Baisigou Square Pagoda in 1991, is believed to have been printed during the reign of Emperor Renzong of Western Xia (1139–1193). It is considered by many Chinese experts to be the earliest extant example of a book printed using wooden movable type.
The logistical problem of handling the several thousand logographs required for full literacy in Chinese posed a particular difficulty: it was faster to carve one woodblock per page than to compose a page from so many different types. For producing many copies of the same document, however, movable type was comparatively faster.
At least 13 material finds in China indicate the invention of bronze movable-type printing in China no later than the 12th century, with the country producing large-scale bronze-plate-printed paper money and formal official documents issued by the Jin (1115–1234) and Southern Song (1127–1279) dynasties with embedded bronze metal types as anti-counterfeit markers. Such paper-money printing might date back to the 11th-century jiaozi of the Northern Song (960–1127).
A typical example of this kind of bronze movable type embedded in copper-block printing is a printed "check" of the Jin dynasty with two square holes for embedding two bronze movable-type characters, each selected from 1,000 different characters, such that each printed paper note bears a different combination of markers. A copper-block-printed note dated 1215–1216, in the collection of Luo Zhenyu's Pictorial Paper Money of the Four Dynasties (1914), shows two special characters, one called Ziliao and the other called Zihao, for the purpose of preventing counterfeiting; over the Ziliao a small character (輶) is printed with movable copper type, while over the Zihao there is an empty square hole, the associated copper type apparently having been lost. Another sample of Song-dynasty money of the same period in the collection of the Shanghai Museum has empty square holes above both Ziliao and Zihao, both copper movable types having been lost. Song-dynasty bronze-block paper money embedded with bronze movable type was issued on a large scale and remained in circulation for a long time.
The 1298 book Zao Huozi Yinshufa (《造活字印书法》 / 《造活字印書法》) by the Yuan-dynasty (1271–1368) official Wang Zhen mentions tin movable type, probably used since the Southern Song dynasty (1127–1279), but this remained largely experimental. It proved unsatisfactory because of its incompatibility with the inking process.
During the Mongol Empire (1206–1405), printing using movable type spread from China to Central Asia. The Uyghurs of Central Asia used movable type, their script type adopted from the Mongol language, some with Chinese words printed between the pages: strong evidence that the books were printed in China.
During the Ming dynasty (1368–1644), Hua Sui used bronze type in printing books in 1490. In 1574 the massive 1,000-volume encyclopedia Imperial Readings of the Taiping Era (《太平御览》 / 《太平御覧》) was printed with bronze movable type.
In 1725 the Qing-dynasty government made 250,000 bronze movable-type characters and printed 64 sets of the encyclopedic Gujin Tushu Jicheng (《古今图书集成》 / 《古今圖書集成》, Complete Collection of Illustrations and Writings from the Earliest to Current Times). Each set consisted of 5,040 volumes, making a total of 322,560 volumes printed using movable type.
In 1234 the first books known to have been printed with metal movable type were published in Goryeo-dynasty Korea. They form a set of ritual books, the Sangjeong Gogeum Yemun, compiled by Choe Yun-ui.
While those books have not survived, the oldest extant book in the world printed with metal movable type is the Jikji, printed in Korea in 1377. The Asian Reading Room of the Library of Congress in Washington, D.C. displays examples of this metal type. Commenting on the invention of metal types by Koreans, the French scholar Henri-Jean Martin described it as "extremely similar to Gutenberg's".
The techniques of bronze casting, used at the time for making coins (as well as bells and statues), were adapted to making metal type. The Joseon-dynasty scholar Seong Hyeon (성현, 成俔, 1439–1504) records the following description of the Korean font-casting process:
At first, one cuts letters in beech wood. One fills a trough level with the fine sandy (clay) soil of the reed-growing seashore. The wood-cut letters are pressed into the sand, and the impressions become negatives and form letters (moulds). At this step, placing one trough together with another, one pours the molten bronze down into an opening. The fluid flows in, filling these negative moulds, one by one becoming type. Lastly, one scrapes and files off the irregularities, and piles them up to be arranged.
A potential solution to the linguistic and cultural bottleneck that held back movable type in Korea for 200 years appeared in the early 15th century, a generation before Gutenberg began working on his own movable-type invention in Europe, when Sejong the Great devised a simplified alphabet of 24 characters (hangul) for use by the common people, which could have made the typecasting and compositing process more feasible. But Korea's cultural elite, "appalled at the idea of losing hanja, the badge of their elitism", stifled the adoption of the new alphabet.
A "Confucian prohibition on the commercialization of printing" also obstructed the proliferation of movable type, restricting the distribution of books produced using the new method to the government. The technique was restricted to use by the royal foundry for official state publications only, where the focus was on reprinting Chinese classics lost in 1126, when Korea's libraries and palaces perished in a conflict between dynasties.
Scholarly debate and speculation continue as to whether Eastern movable type spread to Europe between the late 14th and early 15th centuries.
Johannes Gutenberg of Mainz, Germany, is acknowledged as the first to invent a metal movable-type printing system in Europe: the printing press. Gutenberg was a goldsmith familiar with techniques of cutting punches for making coins from moulds. Between 1436 and 1450 he developed hardware and techniques for casting letters from matrices using a device called the hand mould. Gutenberg's key invention and contribution to movable-type printing in Europe, the hand mould was the first practical means of making cheap copies of letterpunches in the vast quantities needed to print complete books, making the movable-type printing process a viable enterprise.
Before Gutenberg, books were copied out by hand on scrolls and paper, or printed from hand-carved wooden blocks. The process was extremely time-consuming; even a small book could take months to complete, and because the carved letters were flimsy and the wood was degraded by the ink, the blocks had a limited lifespan.
Gutenberg and his associates developed oil-based inks ideally suited to printing with a press on paper, and the first Latin typefaces. His method of casting type may have differed from the hand mould used in subsequent decades. Detailed analysis of the type used in his 42-line Bible has revealed irregularities in some of the characters that cannot be attributed to ink spread or type wear under the pressure of the press. Scholars conjecture that the type pieces may have been cast from a series of matrices made with a series of individual stroke punches, producing many different versions of the same glyph.
It has also been suggested that the method Gutenberg used involved a single punch to make a mould, but that the mould was such that removing the cast type disturbed it, creating variants and anomalies, and that the punch-matrix system came into use around the 1470s. This raises the possibility that the development of movable type in the West was progressive rather than a single innovation.
Gutenberg's movable-type printing system spread rapidly across Europe, from the single Mainz printing press in 1457 to 110 presses by 1480, of which 50 were in Italy. Venice quickly became the center of typographic and printing activity. Significant were the contributions of Nicolas Jenson, Francesco Griffo, Aldus Manutius, and other printers of late 15th-century Europe.
Type-founding as practised in Europe and the West consists of three stages: cutting the punch, striking the matrix, and casting the type.
Type height varied considerably from country to country, and the Monotype Corporation Limited in London produced moulds in various heights to suit. A Dutch printer's manual mentions a tiny difference between the French and German heights; even such tiny differences in type height cause characters to print noticeably bolder or lighter.
At the end of the 19th century only two typefoundries were left in the Netherlands: Johan Enschedé & Zonen, at Haarlem, and Lettergieterij Amsterdam, voorheen Tetterode. Each had its own type height: Enschedé 65 23/24 points Didot, and Amsterdam 66 1/24 points Didot. The difference was enough to prevent a combined use of fonts from the two foundries: the Enschedé type would print too light, or the Amsterdam font rather bold. A perfect way of binding clients.
In 1905 the Dutch governmental "Algemeene Landsdrukkerij", later the State Printery (Staatsdrukkerij), decided during a reorganisation to use a standard type height of 63 points Didot, the so-called "Staatsdrukkerij-hoogte". This was actually the Belgian height, though that fact was not widely known.
Modern, factory-produced movable type was available in the late 19th century. It was held in the printing shop in a job case, a drawer about 2 inches high, a yard wide, and about two feet deep, with many small compartments for the various letters and ligatures. The most popular and accepted of the job case designs in America was the California Job Case, which took its name from the Pacific-coast location of the foundries that made the case popular.
Traditionally, the capital letters were stored in a separate drawer or case located above the case that held the other letters; this is why capital letters are called "upper case" characters while the non-capitals are "lower case".
Compartments also held spacers: blocks of blank type used to separate words and fill out a line of type, such as em and en quads (quadrats). A quadrat is a block of type whose face is lower than the printing letters, so that it does not itself print. An em quad was as wide as the type is high (the width of a capital "M"), while an en quad was half that width (roughly the width of a capital "N").
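The em/en proportions above can be expressed as a tiny calculation. This is only an illustrative sketch; the function name and the point-based units are assumptions for the example, not part of the source.

```python
# Illustrative sketch (not from the source): widths of em and en quads.
# By convention an em quad is as wide as the type size is high, and an
# en quad is half an em.
def quad_widths(point_size):
    em = point_size      # em quad: width equals the type's height
    en = point_size / 2  # en quad: half the width of an em
    return {"em": em, "en": en}

print(quad_widths(12))  # {'em': 12, 'en': 6.0}
```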
Individual letters are assembled into words and lines of text with the aid of a composing stick, and the whole assembly is tightly bound together to make up a page image called a forme, in which all letter faces are exactly the same height, forming a flat surface of type. The forme is mounted on a printing press, a thin coating of viscous ink is applied, and impressions are made on paper under great pressure in the press. "Sorts" is the term given to special characters not freely available in the typical type case, such as the "@" mark.
It is sometimes erroneously stated that printing with metal type replaced the earlier methods. In the industrial era, printing methods were chosen to suit the purpose. For example, when printing large letters for posters, metal type would have proved too heavy and economically unviable, so large type was made as carved wood blocks as well as ceramic plates. In many cases where large text was required, it was also simpler to hand the job to a sign painter than to a printer. Images could be printed together with movable type if they were made as woodcuts or wood engravings, as long as the blocks were made to the same type height. If intaglio methods, such as copper plates, were used for the images, then the images and the text required separate print runs on different machines.
Question: who invented the barometer, and what does it measure | Barometer - Wikipedia
A barometer is a scientific instrument used in meteorology to measure atmospheric pressure. Pressure tendency can forecast short-term changes in the weather. Numerous measurements of air pressure are used within surface weather analysis to help find surface troughs, high-pressure systems and frontal boundaries.
Barometers and pressure altimeters (the most basic and common type of altimeter) are essentially the same instrument, but used for different purposes. An altimeter is intended to be transported from place to place matching the atmospheric pressure to the corresponding altitude, while a barometer is kept stationary and measures subtle pressure changes caused by weather. The main exception to this is ships at sea, which can use a barometer because their elevation does not change.
Although Evangelista Torricelli is universally credited with inventing the barometer in 1643, historical documentation also suggests Gasparo Berti, an Italian mathematician and astronomer, unintentionally built a water barometer sometime between 1640 and 1643. French scientist and philosopher René Descartes described the design of an experiment to determine atmospheric pressure as early as 1631, but there is no evidence that he built a working barometer at that time.
On July 27, 1630, Giovanni Battista Baliani wrote a letter to Galileo Galilei explaining an experiment he had made in which a siphon, led over a hill about twenty-one meters high, failed to work. Galileo responded with an explanation of the phenomenon: he proposed that it was the power of a vacuum that held the water up, and that at a certain height the amount of water simply became too much and the force could hold no more, like a cord that can support only so much weight. This was a restatement of the theory of horror vacui ("nature abhors a vacuum"), which dates to Aristotle, and which Galileo restated as resistenza del vacuo.
Galileo 's ideas reached Rome in December 1638 in his Discorsi. Raffaele Magiotti and Gasparo Berti were excited by these ideas, and decided to seek a better way to attempt to produce a vacuum other than with a siphon. Magiotti devised such an experiment, and sometime between 1639 and 1641, Berti (with Magiotti, Athanasius Kircher and Niccolò Zucchi present) carried it out.
Four accounts of Berti's experiment exist, but a simple model of it consists of filling a long tube, plugged at both ends, with water, then standing the tube in a basin already full of water. The bottom end of the tube was opened, and water that had been inside it poured out into the basin. However, only part of the water flowed out, and the level of the water inside the tube stayed at an exact height, which happened to be 10.3 m, the same limit Baliani and Galileo had observed for the siphon. What was most important about this experiment was that the lowering water left a space above it in the tube which had no contact with air to fill it up. This seemed to suggest the possibility of a vacuum existing in the space above the water.
Torricelli, a friend and student of Galileo, interpreted the results of the experiments in a novel way. He proposed that the weight of the atmosphere, not an attracting force of the vacuum, held the water in the tube. In a letter to Michelangelo Ricci in 1644 concerning the experiments, he wrote:
Many have said that a vacuum does not exist, others that it does exist in spite of the repugnance of nature and with difficulty; I know of no one who has said that it exists without difficulty and without a resistance from nature. I argued thus: If there can be found a manifest cause from which the resistance can be derived which is felt if we try to make a vacuum, it seems to me foolish to try to attribute to vacuum those operations which follow evidently from some other cause; and so by making some very easy calculations, I found that the cause assigned by me (that is, the weight of the atmosphere) ought by itself alone to offer a greater resistance than it does when we try to produce a vacuum.
It was traditionally thought (especially by the Aristotelians) that the air did not have lateral weight: that is, that the kilometers of air above the surface did not exert any weight on the bodies below it. Even Galileo had accepted the weightlessness of air as a simple truth. Torricelli questioned that assumption, and instead proposed that air had weight and that it was the latter (not the attracting force of the vacuum) which held (or rather, pushed) up the column of water. He thought that the level the water stayed at (c. 10.3 m) was reflective of the force of the air 's weight pushing on it (specifically, pushing on the water in the basin and thus limiting how much water can fall from the tube into it). In other words, he viewed the barometer as a balance, an instrument for measurement (as opposed to merely being an instrument to create a vacuum), and because he was the first to view it this way, he is traditionally considered the inventor of the barometer (in the sense in which we now use the term).
Because of rumors circulating in Torricelli's gossipy Italian neighborhood, including that he was engaged in some form of sorcery or witchcraft, Torricelli realized he had to keep his experiment secret to avoid the risk of arrest. He needed a liquid heavier than water, and from his previous association with Galileo and suggestions made by him, he deduced that by using mercury a shorter tube could be used. With mercury, which is about 14 times denser than water, a tube of only about 80 cm was needed, rather than over 10 m.
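The density argument can be checked with a quick calculation. This is a minimal sketch assuming standard modern values for atmospheric pressure and gravity (not figures from the source): a liquid column of density ρ balances the atmosphere at height h = P/(ρg).

```python
# Sketch under assumed standard values: P_atm = 101325 Pa, g = 9.81 m/s^2.
P_ATM = 101325.0  # pascals, standard atmosphere
G = 9.81          # m/s^2, standard gravity

def column_height(density_kg_m3):
    """Height at which a liquid column's weight balances atmospheric
    pressure: P = rho * g * h, so h = P / (rho * g)."""
    return P_ATM / (density_kg_m3 * G)

print(round(column_height(1000.0), 1))   # water:   ~10.3 m
print(round(column_height(13600.0), 2))  # mercury: ~0.76 m
```

The ratio of the two heights is just the density ratio, which is why switching to mercury shrinks the tube from roughly ten metres to under a metre.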
In 1646, Blaise Pascal, along with Pierre Petit, repeated and perfected Torricelli's experiment after hearing about it from Marin Mersenne, who had himself been shown the experiment by Torricelli toward the end of 1644. Pascal further devised an experiment to test the Aristotelian proposition that it was vapors from the liquid that filled the space in a barometer. His experiment compared water with wine, and since the latter was considered more "spiritous", the Aristotelians expected the wine to stand lower (since more vapors would mean more pushing down on the liquid column). Pascal performed the experiment publicly, inviting the Aristotelians to predict the outcome beforehand. The Aristotelians predicted the wine would stand lower. It did not.
However, Pascal went even further to test the mechanical theory. If, as mechanical philosophers like Torricelli and Pascal suspected, air had weight, then the pressure of the air would be less at higher altitudes. Pascal therefore wrote to his brother-in-law, Florin Perier, who lived near the mountain called the Puy de Dôme, asking him to perform a crucial experiment: Perier was to carry a barometer up the Puy de Dôme, measuring the height of the mercury column along the way, and then compare those readings with measurements taken at the foot of the mountain, to see whether the readings taken higher up were in fact smaller. In September 1648, Perier carefully and meticulously carried out the experiment and found that Pascal's predictions were correct: the mercury barometer stood lower the higher one went.
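The size of the effect Perier saw can be estimated with the isothermal barometric formula P(h) = P₀·exp(−Mgh/RT). This is a hedged sketch, not the historical measurement: the sea-level pressure, the 15 °C air temperature, and the round 1000 m climb are all assumed values.

```python
import math

# Assumed round numbers (not from the source): sea-level pressure,
# 15 degrees C air, and a climb of about 1000 m to the summit.
def pressure_at(height_m, p0=101325.0, temp_k=288.0):
    """Isothermal barometric formula: P(h) = p0 * exp(-M*g*h / (R*T))."""
    M, g, R = 0.029, 9.81, 8.314  # molar mass of air, gravity, gas constant
    return p0 * math.exp(-M * g * height_m / (R * temp_k))

drop_pa = pressure_at(0.0) - pressure_at(1000.0)
drop_mm_hg = drop_pa / 133.322  # convert pascals to mm of mercury
print(round(drop_mm_hg))
```

This gives a drop on the order of 85 mm of mercury per kilometre of ascent, easily visible on a mercury column and of the same order as Perier's reported result.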
The concept that decreasing atmospheric pressure predicts stormy weather, postulated by Lucien Vidi, provides the theoretical basis for a weather-prediction device called a "weather glass" or "Goethe barometer" (named for Johann Wolfgang von Goethe, the renowned German writer and polymath, who developed a simple but effective weather-ball barometer using the principles developed by Torricelli). The French name, le baromètre Liégeois, is used by some English speakers; it reflects the origin of many early weather glasses in the glass blowers of Liège, Belgium.
The weather ball barometer consists of a glass container with a sealed body, half filled with water. A narrow spout connects to the body below the water level and rises above the water level. The narrow spout is open to the atmosphere. When the air pressure is lower than it was at the time the body was sealed, the water level in the spout will rise above the water level in the body; when the air pressure is higher, the water level in the spout will drop below the water level in the body. A variation of this type of barometer can be easily made at home.
A mercury barometer has a glass tube closed at one end, with an open mercury-filled reservoir at the base. The weight of the mercury creates a vacuum in the top of the tube, known as the Torricellian vacuum. Mercury in the tube adjusts until the weight of the mercury column balances the atmospheric force exerted on the reservoir. High atmospheric pressure places more force on the reservoir, forcing mercury higher in the column. Low pressure allows the mercury to drop to a lower level in the column by lowering the force placed on the reservoir. Since higher temperature around the instrument reduces the density of the mercury, the scale for reading the height of the mercury is adjusted to compensate for this effect. The tube must be at least as long as the depth of immersion in the mercury, plus the head space, plus the maximum height of the column.
Torricelli documented that the height of the mercury in a barometer changed slightly each day and concluded that this was due to the changing pressure in the atmosphere. He wrote: "We live submerged at the bottom of an ocean of elementary air, which is known by incontestable experiments to have weight ''.
The mercury barometer's design gives rise to the expression of atmospheric pressure in inches of mercury or millimetres of mercury (torr): the pressure is quoted as the height of the mercury in the vertical column. Typically, atmospheric pressure is measured between 26.5 inches (670 mm) and 31.5 inches (800 mm) of Hg. One atmosphere (1 atm) is equivalent to 29.92 inches (760 mm) of mercury.
Design changes to make the instrument more sensitive, simpler to read, and easier to transport resulted in variations such as the basin, siphon, wheel, cistern, Fortin, multiple folded, stereometric, and balance barometers. Fitzroy barometers combine the standard mercury barometer with a thermometer, as well as a guide of how to interpret pressure changes. Fortin barometers use a variable displacement mercury cistern, usually constructed with a thumbscrew pressing on a leather diaphragm bottom. This compensates for displacement of mercury in the column with varying pressure. To use a Fortin barometer, the level of mercury is set to the zero level before the pressure is read on the column. Some models also employ a valve for closing the cistern, enabling the mercury column to be forced to the top of the column for transport. This prevents water - hammer damage to the column in transit.
On June 5, 2007, a European Union directive was enacted to restrict the sale of mercury, thus effectively ending the production of new mercury barometers in Europe.
Using vacuum pump oil as the working fluid in a barometer has led to the creation of the new "World 's Tallest Barometer '' in February 2013. The barometer at Portland State University (PSU) uses doubly distilled vacuum pump oil and has a nominal height of about 12.4 m for the oil column height; expected excursions are in the range of ± 0.4 m over the course of a year. Vacuum pump oil has very low vapor pressure and it is available in a range of densities; the lowest density vacuum oil was chosen for the PSU barometer to maximize the oil column height.
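As a rough cross-check of the figures quoted above, the implied density of the pump oil can be recovered from the 12.4 m column height. This is a sketch only; the 101,325 Pa standard atmosphere it assumes is not a value given in the text.

```python
# Infer the working fluid's density from the reported column height,
# rearranging P = rho * g * h. The 101,325 Pa figure is an assumed
# standard atmosphere, not a value stated in the source.
P_ATM = 101325.0  # Pa (assumption)
G = 9.807         # m/s^2

oil_height = 12.4  # m, nominal height reported for the PSU barometer
implied_density = P_ATM / (G * oil_height)  # roughly 830 kg/m^3
```

A density in the low 800s of kg/m³ is indeed far below mercury's, which is why the oil column must stand so much taller.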
An aneroid barometer is an instrument for measuring pressure that does not involve liquid. Invented in 1844 by French scientist Lucien Vidi, the aneroid barometer uses a small, flexible metal box called an aneroid cell (capsule), which is made from an alloy of beryllium and copper. The evacuated capsule (or usually several capsules, stacked to add up their movements) is prevented from collapsing by a strong spring. Small changes in external air pressure cause the cell to expand or contract. This expansion and contraction drives mechanical levers such that the tiny movements of the capsule are amplified and displayed on the face of the aneroid barometer. Many models include a manually set needle which is used to mark the current measurement so a change can be seen. In addition, the mechanism is made deliberately "stiff" so that tapping the barometer reveals whether the pressure is rising or falling as the pointer moves. This type of barometer is common in homes and in recreational boats, as well as small aircraft. It is also used in meteorology, mostly in barographs and as a pressure instrument in radiosondes.
A barograph records a graph of atmospheric pressure.
Microelectromechanical systems (or MEMS) barometers are extremely small devices between 1 and 100 micrometres in size (i.e. 0.001 to 0.1 mm). They are created via photolithography or photochemical machining. Typical applications include miniaturized weather stations, electronic barometers and altimeters.
A barometer can also be found in smartphones such as the Samsung Galaxy Nexus, Samsung Galaxy S3 - S6, Motorola Xoom, Apple iPhone 6 smartphones, and Timex Expedition WS4 smartwatch, based on MEMS and piezoresistive pressure - sensing technologies. Inclusion of barometers on smartphones was originally intended to provide a faster GPS lock. However, third party researchers were unable to confirm additional GPS accuracy or lock speed due to barometric readings. The researchers suggest that the inclusion of barometers in smartphones may provide a solution to determining a user 's elevation, but also suggest that several pitfalls must first be overcome.
There are many other more unusual types of barometer. From variations on the storm barometer, such as the Collins Patent Table Barometer, to more traditional - looking designs such as Hooke 's Otheometer and the Ross Sympiesometer. Some, such as the Shark Oil barometer, work only in a certain temperature range, achieved in warmer climates.
Barometric pressure and the pressure tendency (the change of pressure over time) have been used in weather forecasting since the late 19th century. When used in combination with wind observations, reasonably accurate short - term forecasts can be made. Simultaneous barometric readings from across a network of weather stations allow maps of air pressure to be produced, which were the first form of the modern weather map when created in the 19th century. Isobars, lines of equal pressure, when drawn on such a map, give a contour map showing areas of high and low pressure. Localized high atmospheric pressure acts as a barrier to approaching weather systems, diverting their course. Atmospheric lift caused by low - level wind convergence into the surface brings clouds and sometimes precipitation. The larger the change in pressure, especially if more than 3.5 hPa (0.1 inHg), the greater the change in weather that can be expected. If the pressure drop is rapid, a low pressure system is approaching, and there is a greater chance of rain. Rapid pressure rises, such as in the wake of a cold front, are associated with improving weather conditions, such as clearing skies.
With falling air pressure, gases trapped within the coal in deep mines can escape more freely. Thus low pressure increases the risk of firedamp accumulating. Collieries therefore keep track of the pressure. In the case of the Trimdon Grange colliery disaster of 1882 the mines inspector drew attention to the records and in the report stated "the conditions of atmosphere and temperature may be taken to have reached a dangerous point ''.
Aneroid barometers are used in scuba diving. A submersible pressure gauge is used to keep track of the contents of the diver 's air tank. Another gauge is used to measure the hydrostatic pressure, usually expressed as a depth of sea water. Either or both gauges may be replaced with electronic variants or a dive computer.
The density of mercury will change with increase or decrease in temperature, so a reading must be adjusted for the temperature of the instrument. For this purpose a mercury thermometer is usually mounted on the instrument. Temperature compensation of an aneroid barometer is accomplished by including a bi-metal element in the mechanical linkages. Aneroid barometers sold for domestic use typically have no compensation under the assumption that they will be used within a controlled room temperature range.
As the air pressure decreases at altitudes above sea level (and increases below sea level) the uncorrected reading of the barometer will depend on its location. The reading is then adjusted to an equivalent sea - level pressure for purposes of reporting. For example, if a barometer located at sea level and under fair weather conditions is moved to an altitude of 1,000 feet (305 m), about 1 inch of mercury (~ 35 hPa) must be added on to the reading. The barometer readings at the two locations should be the same if there are negligible changes in time, horizontal distance, and temperature. If this were not done, there would be a false indication of an approaching storm at the higher elevation.
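The rule of thumb in the paragraph above (add about 1 inch of mercury per 1,000 feet of altitude, near sea level) can be sketched as a simple correction function. The function name and example values below are illustrative, not from any standard:

```python
# Approximate sea-level correction for a barometer reading, using the
# rule of thumb from the text: add ~1 inHg per 1,000 ft of altitude.
# Only a rough approximation, valid for modest altitudes near sea level.
def sea_level_pressure_inhg(station_reading_inhg, altitude_ft):
    correction = altitude_ft / 1000.0  # inHg to add
    return station_reading_inhg + correction

# A station at 1,000 ft reading 28.92 inHg reports about 29.92 inHg
# (one standard atmosphere) after correction.
corrected = sea_level_pressure_inhg(28.92, 1000)
```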
Aneroid barometers have a mechanical adjustment that allows the equivalent sea level pressure to be read directly and without further adjustment if the instrument is not moved to a different altitude. Setting an aneroid barometer is similar to resetting an analog clock that is not at the correct time. Its dial is rotated so that the current atmospheric pressure from a known accurate and nearby barometer (such as the local weather station) is displayed. No calculation is needed, as the source barometer reading has already been converted to equivalent sea - level pressure, and this is transferred to the barometer being set -- regardless of its altitude. Though somewhat rare, a few aneroid barometers intended for monitoring the weather are calibrated to manually adjust for altitude. In this case, knowing either the altitude or the current atmospheric pressure would be sufficient for future accurate readings.
The table below shows examples for three locations in the city of San Francisco, California. Note the corrected barometer readings are identical, and based on equivalent sea - level pressure. (Assume a temperature of 15 ° C.)
When atmospheric pressure is measured by a barometer, the pressure is also referred to as the "barometric pressure ''. Assume a barometer with a cross-sectional area A, a height h, filled with mercury from the bottom at Point B to the top at Point C. The pressure at the bottom of the barometer, Point B, is equal to the atmospheric pressure. The pressure at the very top, Point C, can be taken as zero because there is only mercury vapor above this point and its pressure is very low relative to the atmospheric pressure. Therefore, one can find the atmospheric pressure using the barometer and this equation:
P = ρgh
where ρ is the density of mercury, g is the gravitational acceleration, and h is the height of the mercury column above the free surface area. The physical dimensions (length of tube and cross-sectional area of the tube) of the barometer itself have no effect on the height of the fluid column in the tube.
In thermodynamic calculations, a commonly used pressure unit is the "standard atmosphere". This is the pressure resulting from a column of mercury of 760 mm in height at 0 °C. For the density of mercury, use ρ = 13,595 kg/m³, and for gravitational acceleration use g = 9.807 m/s².
If water were used (instead of mercury) to meet the standard atmospheric pressure, a water column of roughly 10.3 m (33.8 ft) would be needed.
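The relation P = ρgh and the figures above can be checked numerically; a minimal sketch using the values quoted in the text:

```python
# Hydrostatic pressure P = rho * g * h at the base of a barometer column.
RHO_MERCURY = 13595.0  # kg/m^3 at 0 degrees C (value from the text)
RHO_WATER = 1000.0     # kg/m^3, approximate
G = 9.807              # m/s^2

def column_pressure(rho, h):
    """Pressure (Pa) exerted by a fluid column of height h (m)."""
    return rho * G * h

def column_height(rho, p):
    """Column height (m) that balances pressure p (Pa)."""
    return p / (rho * G)

# A 760 mm mercury column gives one standard atmosphere (~101,325 Pa).
p_atm = column_pressure(RHO_MERCURY, 0.760)

# The equivalent water column is roughly 10.3 m, as stated above.
h_water = column_height(RHO_WATER, 101325.0)
```

Note that, as the text says, the tube's cross-sectional area cancels out: only density, gravity, and height appear in the formula.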
Standard atmospheric pressure as a function of elevation:
Note: 1 torr = 133.3 Pa = 0.03937 inHg
who was the singing voice of ariel in the little mermaid | Jodi Benson - wikipedia
Jodi Marie Marzorati Benson (born October 10, 1961) is an American actress, voice actress and soprano singer. She is best known for providing both the speaking and the singing voice of Disney 's Princess Ariel in The Little Mermaid and its sequel, prequel, and television series spinoff. Benson voiced the character Barbie in the 1999 movie Toy Story 2 and its 2010 Academy Award - winning sequel Toy Story 3. She also voiced Barbie in the Toy Story cartoon Hawaiian Vacation. For her contributions to the Disney company, Benson was named a Disney Legend in 2011.
Benson was the original voice of Ariel in the Academy Award - winning Walt Disney Pictures animated feature film The Little Mermaid and continues to perform Ariel and the bubbly voice of "Barbie '' in Disney / Pixar 's Best Picture Golden Globe winner Toy Story 2 and Academy Award winner Toy Story 3. She also gave voice to the spirited "Weebo '' in Disney 's live action Flubber, starring Robin Williams. For Warner Bros., she created the voice of Thumbelina in 1994, a Don Bluth animated feature film with songs by Barry Manilow. Her other projects include Tinkerbell: Secret of the Wings, The Little Mermaid: Ariel 's Beginning, The Little Mermaid II: Return to the Sea, Lady and the Tramp II: Scamp 's Adventure, 101 Dalmatians II: Patch 's London Adventure, Balto II: Wolf Quest, and Balto III: Wings of Change. She appeared as Patrick Dempsey 's assistant Sam, in Disney 's live - action feature film Enchanted. While being a Disney Legend, she also voiced Jane Doe and Patsy Smiles in Cartoon Network 's Camp Lazlo. She also voiced the character Tula in Fox 's animated series The Pirates of Dark Water.
Benson made her debut in the 1975 Kenny Ortega - directed "Marilyn: An American Fable ''. Other Broadway credits include a starring role in the Broadway musical Smile, where she introduced a song called "Disneyland ''. In 1983, Howard Ashman, the lyricist of Smile, would go on to write the lyrics for The Little Mermaid. She describes the song "Disneyland '' at the "Smile '' Reunion concert held on Sept. 22, 2014, "This is the first piece of the puzzle of my life, the first step of the journey, so to speak ''. Benson also sings "Disneyland '' on a compilation CD called Unsung Musicals. In 1989, Benson appeared in the Broadway musical, Welcome to the Club, alongside Samuel E. Wright, who performed the voice for Sebastian the Crab in The Little Mermaid.
In 1992, Benson received a Tony Award nomination for Best Actress in a Musical for her role as Polly Baker in Crazy For You. She played the narrator in Joseph And The Amazing Technicolor Dreamcoat in 1998.
Benson also played the Queen in a one - night concert version of Rodgers & Hammerstein 's Cinderella at the Nashville Symphony Orchestra in May 2010.
She was at the 2012 SYTA conference singing her signature song "Part of Your World '' on August 27, 2012.
Benson has been the guest artist for the Candlelight Processional for five years at Walt Disney World including December 10 -- 13, 2012.
She joined the "2013 Spring Pops '' on May 14 -- 15, 2013 as a guest soloist with the Boston Pops.
Benson can be heard on over a dozen recordings and has a six - part DVD series entitled Baby Faith from the creators of Baby Einstein. Her animated TV series include the Emmy Award - winning Camp Lazlo for the Cartoon Network, The Little Mermaid, Batman Beyond, The Grim Adventures of Billy and Mandy, The Wild Thornberrys, Barbie, Hercules: Zero to Hero, P.J. Sparkles, and the series Sofia the First for Disney.
On the concert stage, Benson has performed as a concert soloist with symphonies all over the world, including The Boston Pops, The Philly Pops (conductor: Peter Nero), The Hollywood Bowl Orchestra (conductor: John Mauceri), The National Symphony (conductor: Marvin Hamlisch), Cleveland, Dallas, Tokyo, and the San Francisco and Chicago Symphonies. She starred in the Kennedy Center Honors for Ginger Rogers, and in Disney 's Premiere in Central Park with Pocahontas, The Walt Disney World 25th Anniversary Spectacular and Disney 's 100 Years of Magic. Benson is the resident guest soloist for the Walt Disney Company / Disney Cruise Line and ambassador for feature animation.
On June 6, 2016, Benson performed the role of Ariel at the Hollywood Bowl 's concert performance of The Little Mermaid.
In late 1986, Benson first heard of the audition for The Little Mermaid through lyricist and playwright Howard Ashman. The two had just worked together in the Broadway show Smile until its run ended early. He knew she would be the perfect fit for the role. After hearing the demo for "Part of Your World '', she sang a small part of it on tape where it was later sent to Disney executives. Before her audition for The Little Mermaid, she was primarily a stage actress. It was Ashman 's first Disney project. In early 1988, Benson won the role of Ariel.
Benson was born and raised in a Catholic environment, graduating from Boylan Central Catholic High School in Rockford, Illinois. She married actor / singer Ray Benson in 1984. They have two children, McKinley and Delaney.
world record time for men's 110m hurdles | 110 metres hurdles - wikipedia
The 110 metres hurdles, or 110 - meter hurdles, is a hurdling track and field event for men. It is included in the athletics programme at the Summer Olympic Games. The female counterpart is the 100 metres hurdles. As part of a racing event, ten hurdles of 1.067 metres (3.5 ft or 42 inches) in height are evenly spaced along a straight course of 110 metres. They are positioned so that they will fall over if bumped into by the runner. Fallen hurdles do not carry a fixed time penalty for the runners, but they have a significant pull - over weight which slows down the run. Like the 100 metres sprint, the 110 metres hurdles begins in the starting blocks.
For the 110 m hurdles, the first hurdle is placed after a run - up of 13.72 metres (45 ft) from the starting line. The next nine hurdles are set at a distance of 9.14 metres (30 ft) from each other, and the home stretch from the last hurdle to the finish line is 14.02 metres (46 ft) long.
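The three distances quoted sum exactly to the race length, which is easy to verify:

```python
# Sanity check: run-up + nine inter-hurdle gaps + home stretch = 110 m.
RUN_UP = 13.72        # m, start line to first hurdle
GAP = 9.14            # m, between consecutive hurdles
HOME_STRETCH = 14.02  # m, last hurdle to finish line
N_GAPS = 9            # ten hurdles give nine gaps between them

total = RUN_UP + N_GAPS * GAP + HOME_STRETCH  # 110.0 m
```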
The Olympic Games have included the 110 metre hurdles in their program since 1896. The equivalent hurdles race for women was run over a course of 80 metres from 1932 to 1968. Starting with the 1972 Summer Olympics, the women 's race was set at 100 metres. In the early 20th century, the race was often contested as 120 yard hurdles, thus the Imperial units distances between hurdles.
The fastest 110 metre hurdlers run the distance in around 13 seconds. Aries Merritt of the United States holds the current world record of 12.80 seconds, set at the Memorial Van Damme meet on 7 September 2012 in Belgium.
For the first hurdles races in England around 1830, wooden barriers were placed along a stretch of 100 yards (91.44 m).
The first standards were attempted in 1864 in Oxford and Cambridge: The length of the course was set to 120 yards (109.7 m) and over its course, runners were required to clear ten 3 foot 6 inch (1.07 m) high hurdles. The height and spacing of the hurdles have been related to Imperial units ever since. After the length of the course was rounded up to 110 metres in France in 1888, the standards were pretty much complete (except for Germany where 1 metre high hurdles were used until 1907).
The massively constructed hurdles of the early days were first replaced in 1895 with somewhat lighter T - shaped hurdles that runners were able to knock over. However, until 1935 runners were disqualified if they knocked down more than three hurdles, and records were only recognized if the runner had left all hurdles standing.
In 1935 the T - shaped hurdles were replaced by L - shaped ones that easily fall forward if bumped into and therefore reduce the risk of injury. However those hurdles are weighted so it is disadvantageous to hit them.
The current running style where the first hurdle is taken on the run with the upper body lowered instead of being jumped over and with three steps each between the hurdles was first used by the 1900 Olympic champion, Alvin Kraenzlein.
The 110 metre hurdles have been an Olympic discipline since 1896. Women ran it occasionally in the 1920s but it never became generally accepted. From 1926 on, women have only run the 80 metre hurdles which was increased to 100 metres starting in 1961 on a trial basis and in 1969 in official competition.
In 1900 and 1904, the Olympics also included a 200 - metre hurdles race, and the IAAF recognized world records for the 200 metre hurdles until 1960. Don Styron held the world record in the event for over 50 years until Andy Turner broke the record in a specially arranged race at the Manchester City Games in 2010. Styron still holds the world record in the 220 yard low hurdles.
The sprint hurdles are a very rhythmic race because both men and women take 3 steps (meaning 4 foot strikes) between each hurdle, no matter whether running 110 / 100 meters outdoors, or the shorter distances indoors (55 or 60 meters). In addition, the distance from the starting line to the first hurdle - while shorter for women - is constant for both sexes whether indoors or outdoors, so sprint hurdlers do not need to change their stride pattern between indoor and outdoor seasons. One difference between indoor and outdoors is the shorter finishing distance from the last (5th) hurdle indoors, compared to longer distance from the last (10th) hurdle outdoors to the finish line.
Top male hurdlers traditionally took 8 strides from the starting blocks to the first hurdle (indoors and outdoors). The 8 - step start persisted from (at least) the 1950s to the end of the 20th century and included such World - and Olympic champions as Harrison Dillard, Rod Milburn, Greg Foster, Renaldo Nehemiah, Roger Kingdom, Allen Johnson, Mark Crear, Mark McCoy, and Colin Jackson. However, beginning in the 2000s, some hurdle coaches embraced a transition to a faster 7 - step start, teaching the men to lengthen their first few strides out of the starting blocks. Cuban hurdler Dayron Robles set his 2008 world record of 12.87 using a 7 - step start. Chinese star Liu Xiang won the 2004 Olympics and broke the world record in 2006 utilizing an 8 - step approach, but he switched to 7 - steps by the 2011 outdoor season. After the 2010 outdoor season, American Jason Richardson trained to switch to a 7 - step start and went on to win the 2011 World Championship. American Aries Merritt trained in Fall 2011 to switch from 8 to 7, and then had his greatest outdoor season in 2012 - running 8 races in under 13 seconds - capped by winning the London 2012 Olympics and then setting a world record of 12.80.
Of the 10 men with the fastest 110m hurdle times in 2012, seven used 7 - steps, including the top 4: Aries Merritt, Liu Xiang, Jason Richardson, and David Oliver. Hurdle technique experts believe the off - season training required to produce the power and speed necessary to reach the first hurdle in 7 steps, yields greater endurance over the last half of the race. That added endurance allows hurdlers to maintain their top speed to the finish, resulting in a faster time.
Below is a list of all other legal times inside 12.96:
Athletes with two or more victories at the Olympic Games & World Championships:
5 wins:
3 wins:
2 wins:
where is located (approximately) the earth south magnetic pole | South Magnetic Pole - wikipedia
The South Magnetic Pole is the wandering point on the Earth 's Southern Hemisphere where the geomagnetic field lines are directed vertically upwards. It should not be confused with the South Geomagnetic Pole described later.
For historical reasons, the "end '' of a freely hanging magnet that points (roughly) north is itself called the "north pole '' of the magnet, and the other end, pointing south, is called the magnet 's "south pole ''. Because opposite poles attract, the Earth 's South Magnetic Pole is physically actually a magnetic north pole (see also North Magnetic Pole § Polarity).
The South Magnetic Pole is constantly shifting due to changes in the Earth 's magnetic field. As of 2005 it was calculated to lie at 64 ° 31 ′ 48 '' S 137 ° 51 ′ 36 '' E / 64.53000 ° S 137.86000 ° E / - 64.53000; 137.86000, placing it off the coast of Antarctica, between Adélie Land and Wilkes Land. In 2015 it lay at 64 ° 17 ′ S 136 ° 35 ′ E / 64.28 ° S 136.59 ° E / - 64.28; 136.59 (est). That point lies outside the Antarctic Circle. Due to polar drift, the pole is moving northwest by about 10 to 15 kilometres (6 to 9 mi) per year. Its current distance from the actual Geographic South Pole is approximately 2,860 km (1,780 mi). The nearest permanent science station is Dumont d'Urville Station. Wilkes Land contains a large gravitational mass concentration.
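The quoted separation of roughly 2,860 km from the Geographic South Pole follows from the 2015 position; a minimal great-circle sketch, assuming a spherical Earth of radius 6,371 km:

```python
import math

# Great-circle (haversine) distance between two (lat, lon) points in
# degrees, on a sphere of radius 6,371 km (a common mean-Earth value).
def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = phi2 - phi1
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return radius_km * 2 * math.asin(math.sqrt(a))

# 2015 magnetic pole position vs. the Geographic South Pole
# (latitude -90; longitude is irrelevant at the pole itself).
dist = great_circle_km(-64.28, 136.59, -90.0, 0.0)  # ~2,860 km
```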
Early unsuccessful attempts to reach the magnetic south pole included those of French explorer Dumont d'Urville (1837 -- 40), American Charles Wilkes (expedition of 1838 -- 42) and Briton James Clark Ross (expedition of 1839 to 1843).
The first calculation of the magnetic inclination to locate the magnetic South Pole was made on 23 January 1838 by the hydrographer Clément Adrien Vincendon-Dumoulin, a member of the Dumont d'Urville expedition in Antarctica and Oceania on the corvettes L'Astrolabe and Zélée in 1837-1840, which discovered Adélie Land.
On 16 January 1909 three men (Douglas Mawson, Edgeworth David, and Alistair Mackay) from Sir Ernest Shackleton 's Nimrod Expedition claimed to have found the South Magnetic Pole, which was at that time located on land. However, there is now some doubt as to whether their location was correct. The approximate position of the pole on 16 January 1909 was 72 ° 15 ′ S 155 ° 09 ′ E / 72.25 ° S 155.15 ° E / - 72.25; 155.15.
The South Magnetic Pole has also been estimated by fits to global sets of data such as the World Magnetic Model (WMM) and the International Geomagnetic Reference Model (IGRF). For earlier years back to about 1600, the model GUFM1 is used, based on a compilation of data from ship logs.
The Earth 's geomagnetic field can be approximated by a tilted dipole (like a bar magnet) placed at the center of the Earth. The South Geomagnetic Pole is the point where the axis of this best - fitting tilted dipole intersects the Earth 's surface in the southern hemisphere. As of 2005 it was calculated to be located at 79 ° 44 ′ S 108 ° 13 ′ E / 79.74 ° S 108.22 ° E / - 79.74; 108.22, near the Vostok Station. Because the field is not an exact dipole, the South Geomagnetic Pole does not coincide with the South Magnetic Pole. Furthermore, the South Geomagnetic Pole is wandering for the same reason its northern magnetic counterpart wanders.
main idea of upon the burning of our house | Verses upon the burning of our house - wikipedia
Verses upon the Burning of our House (full title: Here follow some verses upon the burning of our house, July 10, 1666) is a poem by Anne Bradstreet. She wrote it to express the traumatic loss of her home and most of her material possessions. However, she expands this into the understanding that God had taken them away in order for her family to live a more pious life. She feels guilty that she is hurt by losing earthly possessions; it is against her belief that she should feel this way, showing she is a sinner. Her deep Puritan beliefs brought her to accept the loss of material goods as a spiritually necessary occurrence. She reminds herself that her future, and anything that has value, lies in heaven. Though she feels guilty, she knows that she is one of the fortunate ones who have salvation regardless; God gives it to his followers, and will help them fight their sin on this earth. The burning of her house was to fight her family's sin of making idols of material things.
The poem has a couplet-based rhyme scheme. Many of its lines have an inverted syntax, making them sound "odd".
In silent night when rest I took,
For sorrow near I did not look,
I wakened was with thundering noise
And piteous shrieks of dreadful voice.
That fearful sound of "Fire" and "Fire,"
Let no man know, is my desire.
I, starting up, the light did spy,
And to my God my heart did cry
To strengthen me in my distress,
And not to leave me succourless.
Then coming out, behold a space
The flame consume my dwelling place.
And when I could no longer look,
I blest His name that gave and took,
That laid my goods now in the dust;
Yea, so it was, and so 'twas just.
It was His own; it was not mine.
Far be it that I should repine.
He might of all justly bereft,
But yet sufficient for us left.
When by the ruins oft I passed
My sorrowing eyes aside did cast
And here and there the places spy
Where oft I sat and long did lie.
Here stood that trunk, and there that chest;
There lay that store I counted best,
My pleasant things in ashes lie,
And them behold no more shall I.
Under thy roof no guest shall sit,
Nor at thy table eat a bit;
No pleasant tale shall e'er be told,
Nor things recounted done of old;
No candle e'er shall shine in thee,
Nor bridegroom's voice e'er heard shall be.
In silence ever shall thou lie.
Adieu, Adieu, all's vanity.
Then straight I 'gin my heart to chide:
And did thy wealth on earth abide?
Didst fix thy hope on mouldring dust?
The arm of flesh didst make thy trust?
Raise up thy thoughts above the sky
That dunghill mists away may fly.
Thou hast a house on high erect;
Framed by that mighty Architect,
With glory richly furnished
Stands permanent though this be fled.
It's purchased, and paid for, too,
By him who hath enough to do -
A price so vast as is unknown,
Yet, by His gift, is made thine own.
There's wealth enough; I need no more.
Farewell, my pelf; farewell, my store;
The world no longer let me love.
My hope and treasure lie above.
5 traditional practices of courtship in the philippines | Courtship in the Philippines - wikipedia
Traditional courtship in the Philippines is described as a "far more subdued and indirect '' approach compared to Western or Westernized cultures. It involves "phases '' or "stages '' inherent to Philippine society and culture. Evident in courtship in the Philippines is the practice of singing romantic love songs, reciting poems, writing letters, and gift - giving. This respect extends to the Filipina 's family members. The proper rules and standards in traditional Filipino courtship are set by Philippine society.
Often, a Filipino male suitor expresses his interest to a woman in a discreet and friendly manner in order to avoid being perceived as very "presumptuous or aggressive '' or arrogant. Culturally, another gentlemanly way of seeking the attention of a woman is not to be done by the admirer by approaching her in the street to casually ask for her address or telephone number. Although having a series of friendly dates is the normal starting point in the Filipino way of courting, this may also begin through the process of "teasing '', a process of "pairing off '' a potential teenage or adult couple. The teasing is done by peers or friends of the couple being matched. The teasing practice assists in discerning the actual feelings of the male and the female involved. Traditionally, a Filipino woman is "shy and secretive '' about her feelings for a suitor. On the other hand, the Filipino man fears rejection by a woman and would like to avoid losing face and embarrassment. This teasing phase actually helps in circumventing such an embarrassing predicament because formal courtship has not yet officially started. Furthermore, this "testing phase '' also helps a man who could be "torpe '', a Filipino term for a suitor who is shy, "stupid '', and feels cowardly, and is innocent and naïve in how to court a woman. However, this type of admirer could overcome his shyness and naivety by asking for the help of a "tulay '' (Filipino for "Bridge '', whose role is similar to that of the Wingman in Western Cultures), typically a mutual friend of both the suitor and the admired, or a close friend of both families. The "human bridge '' acts as the suitor 's communicator. Through this "human - bridge '', the bachelor can also ask permission to visit the woman at home from the bachelorette 's father. As a norm, the couple will not be left alone with each other during this first home visit, because formal introductions to family members are done, which may be performed by the "tulay ''. 
Informal conversation also takes place.
During this preliminary evaluation period, the Filipino woman will either deny her feelings (or the absence of feelings for the suitor) and avoids her admirer, or does not become angry because of the teasing and encourages the suitor instead. The suitor stops the courtship if he is quite sure that the woman does not reciprocate. But once the female encourages the suitor to continue, the "teasing stage '' comes to a close and a "serious stage '' of Philippine courtship begins. It is within this stage where the couple engages in a series of group dates, chaperoned dates, or private dates. The couple later on decides to come out into the open and reveals the status of their relationship to family members, relatives, and friends. The serious suitor or boyfriend visits the family of the woman he admires / courts or girlfriend in order to formally introduce himself, particularly to the lady 's parents. Bringing gifts or pasalubong (which may include flowers, with cards, or letters, and the like) is also typical. Courting a woman in the Philippines is described as a courtship that also includes courting the woman 's family. The actual boyfriend - girlfriend relationship may also result from such formal visits. In the past, particularly in a rural courtship setting, a Filipino man, accompanied by friends, would engage in serenading the woman he adores at night. This serenading practice was an influence adopted by the Filipinos from the Spaniards.
During the courtship process, a traditional Filipina is expected to play "hard to get", to act as if not interested, not to be flirty, and to show utmost restraint, modesty, shyness, good upbringing and good manners, remaining demure and reserved despite having great feelings for her admirer. This behavior is culturally considered appropriate while being courted, and it serves as a tool for measuring the admirer's sincerity and seriousness. The woman can also entertain many suitors, from whom she can choose the man she finally wants to date. Dating couples are expected to be conservative and not engage in public displays of affection. Traditionally, courtship may last a number of years before the Filipino woman accepts her suitor as a boyfriend. This conservativeness, together with the repression of emotions and affection, was inherited by the Filipino woman from the Spanish colonial period, a characteristic referred to as the Maria Clara attitude.
After the girlfriend-boyfriend stage, engagement and marriage follow. With regard to the engagement and pre-marriage stages, Filipino tradition dictates that the man and his parents perform the pamamanhikan or pamanhikan (literally, a Tagalog word meaning "to go up the stairs of the house" of the girlfriend and her parents; pamamanhikan is known as tampa or danon to the Ilocanos, as pasaguli to the Palaweños, and as kapamalai to the Maranaos). This is when the man and his parents formally ask the lady's parents for her hand and their blessing to marry, and when the two sets of parents are formally introduced. Apart from presents, the Cebuano version of the pamamanhikan includes bringing in musicians. After the date of the wedding and the dowry are settled, the couple is considered officially engaged. The dowry, as a norm in the Philippines, is provided by the groom's family. For the Filipino people, marriage is a union of two families, not just of two persons; therefore, marrying well "enhances the good name" of both families.
Apart from the general background explained above, there are other similar and unique courting practices adhered to by Filipinos in different regions of the Philippine archipelago. On the island of Luzon, the Ilocanos also perform serenading, known to them as tapat (literally, "to be in front of" the home of the courted woman), which is similar to the harana and also to the balagtasan of the Tagalogs. The suitor begins singing a romantic song, and the courted lady responds by singing too. In reality, the harana is a musical exchange of messages, which can be about waiting, or loving, or just saying no; the suitor initiates and the lady responds. As the pamamaalam stage sets in, the suitor sings one last song and the haranistas disappear into the night.
Rooster courtship is another form of courting in Luzon. In this type of courtship, the rooster is assigned the task of being a "middleman", a "negotiator", or a "go-between": the male chicken is left to stay in the home of the courted woman and to crow every single morning for her family.
In the province of Bulacan in Central Luzon, the Bulaqueños have a kind of courtship known as naninilong (from the Tagalog word silong, "basement"). At midnight the suitor goes beneath the nipa hut, a house elevated on bamboo poles, and prickles the admired woman with a pointed object. Once the prickling catches the attention of the sleeping lady, the couple converse in whispers.
The Ifugao of northern Luzon practice a courtship called ca-i-sing (known as ebgan to the Kalinga tribes and as pangis to the Tingguian tribes), wherein males and females are separated into "houses". The house for the males is called the Ato, while the house for the females is known as the olog or agamang. The males visit the females in the olog, the "betrothal house", to sing romantic songs, and the females reply to these songs also through singing. The ongoing courtship ritual is overseen by a married elder or a childless widow, who keeps the parents of the participating males and females well informed of the progress of the courtship process.
After the courtship process, the Batangueños of Batangas have a peculiar tradition performed on the eve of the wedding: a procession composed of the groom's mother, father, relatives, godfathers, godmothers, bridesmaids, and groomsmen, whose purpose is to bring the cooking ingredients for the celebration to the bride's home, where refreshments await them.
In Pangasinan, the Pangasinenses utilize the taga-amo, literally "tamer", a form of love potion or charm which can be rubbed onto the skin of the admired or taken as a drinkable potion. The suitor may also resort to palabas, meaning a show or drama, wherein the suitor pretends or acts as if he will commit suicide if the lady does not divulge her true feelings, so that the Filipino woman succumbs to revealing her love.
The Apayaos allow the practice of sleeping together during the night, known as liberal courtship or mahal-alay in the vernacular. This form of courting assists in assessing the woman's feelings for her lover.
In Palawan, the Palaweños or Palawanons perform courtship through the use of love riddles. This is known as the pasaguli. The purpose of the love riddles is to assess the sentiments of the parents of both suitor and admirer. After this "riddle courtship '', the discussion proceeds to the pabalic (can also be spelled as pabalik), to settle the price or form of the dowry that will be received by the courted woman from the courting man.
When courting, the Cebuanos also resort to serenading, which is known locally as balak. They also write love letters that are sent via a trusted friend or a relative of the courted woman. Presents are not only given to the woman being courted, but also to her relatives. Similar to the practice in the Pangasinan region, as mentioned above, the Cebuanos also use love potions to win the affection of the Filipino woman.
People from Leyte perform the pangagad or paninilbihan, "servitude", instead of paying a form of dowry during the courtship period. In this form of courting, the suitor accomplishes household and farm chores for the family of the Filipino woman; the service normally lasts for approximately a year before the couple can marry. The Tagalogs of Luzon also refer to this custom as paninilbihan, meaning "being of service", or as subok, a trial or test period for the serving suitor. The Bicolanos of Luzon's Bicol region call this custom pamianan.
Reckless courtship, known in the vernacular as palabas, sarakahan tupul, or magpasumbahi, is practiced by the Tausog people of Mindanao. Similar to the palabas version practiced on the island of Luzon, a suitor threatens to stab his own heart in front of the courted woman's father; if the father refuses to give his daughter's hand, the suitor stabs himself with the knife.
The Bagobo, on the other hand, send a knife or a spear as a gift to the home of the courted woman for inspection. Accepting the weapon is equivalent to accepting the Filipino man's romantic intentions and advances.
Pre-arranged marriages and betrothals are common among Filipino Muslims. These formal engagements are arranged by the parents of the man and the woman, and involve discussions regarding the price and the form of the dowry. The Tausog people proclaim that a wedding will occur through the pangalay, a celebration or announcement performed by playing percussive musical instruments such as the gabbang, the kulintang, and the agong. The wedding is officiated by an imam; readings from the Quran are part of the ceremony, as is the placement of the groom's fingerprint on the bride's forehead.
During the 19th century, in the Spanish Philippines, there was a set of body language expressed by courted women to communicate with their suitors: non-verbal cues which Ambeth Ocampo referred to as "fan language", so called because the woman conveys her messages through silent movements of a hand-held fan. Examples of such speechless communication are as follows: a courted woman covering half of her face would like her suitor to follow her; counting the ribs of the folding fan sends the message that the lady would like to have a conversation with her admirer; holding the fan in the right hand means the woman is willing to have a boyfriend, while carrying it in the left hand signifies that she already has a lover and is thus no longer available; fanning vigorously symbolizes that the lady has deep feelings for a gentleman, while fanning slowly tells that she does not have any feelings for the suitor; putting the fan aside signals that the lady does not want to be wooed by the man; and abruptly closing the fan means the woman dislikes the man.
Through the liberalism of modern-day Filipinos, courtship has been modified, though it remains milder than in the West. Present-day Filipino courtship, as in the traditional form, still starts with the "teasing stage" conducted by friends. Introductions and meetings between prospective couples are now done through a common friend or while attending a party. Modern technology has also become part of present-day courting practices: romantic conversations between the two parties now take place over cellular phones, particularly through text messages, and over the internet, as can be seen in the vast number of apps and websites catering to Filipino dating. Parents, however, still prefer that their daughters be formally courted within the confines of the home, out of respect for the father and mother of the single woman. Although a present-day Filipina may want to encourage a man to court her, or even to initiate the relationship, it is still traditionally "inappropriate" for a suitor to introduce himself to an admired woman, or vice versa, on the street. Servitude and serenading are no longer common, but avoidance of pre-marital sex is still valued.
Other than the so-called modern Philippine courtship through texting and social media, there is another modern style that is not widely discussed in public discourse: North American pickup, as documented by Neil Strauss in his book The Game: Penetrating the Secret Society of Pickup Artists. While a few local companies offer pickup training, it remains to be seen whether these methods will gain widespread acceptance, since the methods, along with the paradigm in which they are rooted, flout the values of most Filipinos.
Ural Mountains - wikipedia
The Ural Mountains (Russian: Ура́льские го́ры, tr. Uralskiye gory; IPA: (ʊˈraljskjɪjə ˈgorɨ); Bashkir: Урал тауҙары, Ural tauźarı), or simply the Urals, are a mountain range that runs approximately from north to south through western Russia, from the coast of the Arctic Ocean to the Ural River and northwestern Kazakhstan. The mountain range forms part of the conventional boundary between the continents of Europe and Asia. Vaygach Island and the islands of Novaya Zemlya form a further continuation of the chain to the north into the Arctic Ocean.
The mountains lie within the Ural geographical region and significantly overlap with the Ural Federal District and with the Ural economic region. They have rich resources, including metal ores, coal, precious and semi-precious stones. Since the 18th century the mountains have contributed significantly to the mineral sector of the Russian economy.
As attested by Sigismund von Herberstein, in the 16th century Russians called the range by a variety of names derived from the Russian words for rock (stone) and belt. The modern Russian name for the Urals (Урал, Ural), first appearing in the 16th-17th centuries when the Russian conquest of Siberia was in its heroic phase, was initially applied to its southern parts and gained currency as the name of the entire range during the 18th century. It might have been a borrowing either from Turkic "stone belt" (Bashkir, where the same name is used for the range) or from Ob-Ugric. From the 13th century, in Bashkortostan there has been a legend about a hero named Ural, who sacrificed his life for the sake of his people; they poured a stone pile over his grave, which later turned into the Ural Mountains. Possibilities include Bashkir үр "elevation; upland", Mansi ур ала "mountain peak, top of the mountain", and Ostyak urr "chain of mountains". V. N. Tatishchev derives this oronym from "belt" and associates it with the Turkic verb oralu-, "gird". I. G. Dobrodomov suggests a transition from Aral to Ural, explained on the basis of ancient Bulgar-Chuvash dialects. The geographer E. V. Hawks believes that the name goes back to the Bashkir folk epic Ural-Batyr. The ethnographer E. N. Shumilov suggests a Mongolian origin, Khural Uul, that is, "meeting of the mountains". The Evenk geographical term era, "mountain", has also been theorized. Finno-Ugrist scholars consider Ural to derive from the Mansi word urr, meaning mountain. Turkologists, on the other hand, largely agree that ural in Tatar means a belt, and recall that an earlier name for the range was "stone belt".
Because Middle Eastern merchants traded with the Bashkirs and other peoples living on the western slopes of the Urals as far north as Great Perm, medieval Middle Eastern geographers had been aware, since at least the 10th century, of the existence of the mountain range in its entirety, stretching as far as the Arctic Ocean in the north. The first Russian mention of the mountains to the east of the East European Plain is provided by the Primary Chronicle, which describes the Novgorodian expedition to the upper reaches of the Pechora in 1096. During the next few centuries Novgorodians engaged in fur trading with the local population and collected tribute from Yugra and Great Perm, slowly expanding southwards. The rivers Chusovaya and Belaya were first mentioned in the chronicles of 1396 and 1468, respectively. In 1430 the town of Solikamsk (Kama Salt) was founded on the Kama at the foothills of the Urals, where salt was produced in open pans. Ivan III of Moscow captured Perm, Pechora and Yugra from the declining Novgorod Republic in 1472. With the excursions of 1483 and 1499-1500 across the Urals, Moscow managed to subjugate Yugra completely.
Nevertheless, around that time the early 16th-century Polish geographer Maciej of Miechów, in his influential Tractatus de duabus Sarmatiis (1517), argued that there were no mountains in Eastern Europe at all, challenging the point of view of some authors of Classical antiquity that was popular during the Renaissance. Only after Sigismund von Herberstein, in his Notes on Muscovite Affairs (1549), had reported, following Russian sources, that there are mountains behind the Pechora, and had identified them with the Ripheans and Hyperboreans of ancient authors, did the existence of the Urals, or at least of their northern part, become firmly established in Western geography. The Middle and Southern Urals were still largely inaccessible and unknown to Russian and Western European geographers.
In the 1550s, after the Tsardom of Russia had defeated the Khanate of Kazan and proceeded to gradually annex the lands of the Bashkirs, the Russians finally reached the southern part of the mountain chain. In 1574 they founded Ufa. The upper reaches of the Kama and Chusovaya in the Middle Urals, still unexplored, as well as parts of Transuralia still held by the hostile Siberian Khanate, were granted to the Stroganovs by several decrees of the tsar in 1558-1574. The Stroganovs' land provided the staging ground for Yermak's incursion into Siberia: Yermak crossed the Urals from the Chusovaya to the Tagil around 1581. In 1597 Babinov's road was built across the Urals from Solikamsk to the valley of the Tura, where the town of Verkhoturye (Upper Tura) was founded in 1598. A customs post was established in Verkhoturye shortly thereafter, and the road remained for a long time the only legal connection between European Russia and Siberia. In 1648 the town of Kungur was founded at the western foothills of the Middle Urals. During the 17th century the first deposits of iron and copper ores, mica, gemstones and other minerals were discovered in the Urals.
Iron and copper smelting works emerged. They multiplied particularly quickly during the reign of Peter I of Russia. In 1720 -- 1722 he commissioned Vasily Tatishchev to oversee and develop the mining and smelting works in the Ural. Tatishchev proposed a new copper smelting factory in Yegoshikha, which would eventually become the core of the city of Perm and a new iron smelting factory on the Iset, which would become the largest in the world at the time of construction and give birth to the city of Yekaterinburg. Both factories were actually founded by Tatishchev 's successor, Georg Wilhelm de Gennin, in 1723. Tatishchev returned to the Ural on the order of Empress Anna to succeed de Gennin in 1734 -- 1737. Transportation of the output of the smelting works to the markets of European Russia necessitated the construction of the Siberian Route from Yekaterinburg across the Ural to Kungur and Yegoshikha (Perm) and further to Moscow, which was completed in 1763 and rendered Babinov 's road obsolete. In 1745 gold was discovered in the Ural at Beryozovskoye and later at other deposits. It has been mined since 1747.
The first ample geographic survey of the Ural Mountains was completed in the early 18th century by the Russian historian and geographer Vasily Tatishchev under the orders of Peter I. Earlier, in the 17th century, rich ore deposits were discovered in the mountains and their systematic extraction began in the early 18th century, eventually turning the region into the largest mineral base of Russia.
One of the first scientific descriptions of the mountains was published in 1770 -- 71. Over the next century, the region was studied by scientists from a number of countries, including Russia (geologist Alexander Karpinsky, botanist Porfiry Krylov and zoologist Leonid Sabaneyev), England (geologist Sir Roderick Murchison), France (paleontologist Edouard de Verneuil), and Germany (naturalist Alexander von Humboldt, geologist Alexander Keyserling). In 1845, Murchison, who had according to Encyclopædia Britannica "compiled the first geologic map of the Ural in 1841 '', published The Geology of Russia in Europe and the Ural Mountains with de Verneuil and Keyserling.
The first railway across the Urals was built by 1878 and linked Perm to Yekaterinburg via Chusovoy, Kushva and Nizhny Tagil. In 1890 a railway linked Ufa and Chelyabinsk via Zlatoust; in 1896 this section became part of the Trans-Siberian Railway. In 1909 yet another railway connecting Perm and Yekaterinburg passed through Kungur by way of the Siberian Route; it eventually replaced the Ufa-Chelyabinsk section as the main trunk of the Trans-Siberian Railway.
The highest peak of the Ural, Mount Narodnaya, (elevation 1,895 m (6,217 ft)) was identified in 1927.
During the Soviet industrialization of the 1930s the city of Magnitogorsk was founded in the south-eastern Urals as a center of iron smelting and steelmaking. During the German invasion of the Soviet Union in 1941-1942, the mountains became a key element in Nazi planning for the territories which they expected to conquer in the USSR. Faced with the threat of having a significant part of the Soviet territories occupied by the enemy, the government evacuated many of the industrial enterprises of European Russia and Ukraine to the eastern foothills of the Urals, considered a safe place out of reach of the German bombers and troops. Three giant tank factories were established at Uralmash in Sverdlovsk (as Yekaterinburg used to be known), Uralvagonzavod in Nizhny Tagil, and the Chelyabinsk Tractor Plant in Chelyabinsk. After the war, in 1947-1948, the Chum-Labytnangi railway, built with the forced labor of Gulag inmates, crossed the Polar Urals.
Mayak, 150 km southeast of Yekaterinburg, was a center of the Soviet nuclear industry and site of the Kyshtym disaster.
The Ural Mountains extend about 2,500 km (1,600 mi) from the Kara Sea to the Kazakh Steppe along the northern border of Kazakhstan. Vaygach Island and the island of Novaya Zemlya form a further continuation of the chain on the north. Geographically this range marks the northern part of the border between the continents of Europe and Asia. Its highest peak is Mount Narodnaya, approximately 1,895 m (6,217 ft) in elevation.
By topography and other natural features, the Urals are divided, from north to south, into the Polar (or Arctic), Nether-Polar (or Sub-Arctic), Northern, Central and Southern parts. The Polar Urals extend for about 385 kilometers (239 mi) from Mount Konstantinov Kamen in the north to the Khulga River in the south; they have an area of about 25,000 square kilometers (9,700 sq mi) and a strongly dissected relief. The maximum height is 1,499 m (4,918 ft) at Payer Mountain and the average height is 1,000 to 1,100 m (3,300 to 3,600 ft).
The mountains of the Polar Ural have exposed rock with sharp ridges, though flattened or rounded tops are also found.
The Nether - Polar Ural are higher, and up to 150 km (93 mi) wider than the Polar Urals. They include the highest peaks of the range: Mount Narodnaya (1,895 m (6,217 ft)), Mount Karpinsky (1,878 m (6,161 ft)) and Manaraga (1,662 m (5,453 ft)). They extend for more than 225 km (140 mi) south to the Shchugor River. The many ridges are sawtooth shaped and dissected by river valleys. Both Polar and Nether - Polar Urals are typically Alpine; they bear traces of Pleistocene glaciation, along with permafrost and extensive modern glaciation, including 143 extant glaciers.
The Northern Ural consist of a series of parallel ridges up to 1,000 -- 1,200 m (3,300 -- 3,900 ft) in height and longitudinal hollows. They are elongated from north to south and stretch for about 560 km (350 mi) from the Usa River. Most of the tops are flattened, but those of the highest mountains, such as Telposiz, 1,617 m (5,305 ft) and Konzhakovsky Stone, 1,569 m (5,148 ft) have a dissected topography. Intensive weathering has produced vast areas of eroded stone on the mountain slopes and summits of the northern areas.
The Central Ural are the lowest part of the Ural, with smooth mountain tops, the highest mountain being 994 m (3,261 ft) (Basegi); they extend south from the Ufa River.
The relief of the Southern Ural is more complex, with numerous valleys and parallel ridges directed south - west and meridionally. The range includes the Ilmensky Mountains separated from the main ridges by the Miass River. The maximum height is 1,640 m (5,380 ft) (Mount Yamantau) and the width reaches 250 km (160 mi). Other notable peaks lie along the Iremel mountain ridge (Bolshoy Iremel and Maly Iremel). The Southern Urals extend some 550 km (340 mi) up to the sharp westward bend of the Ural River and terminate in the wide Mughalzhar Hills.
The Urals are among the world's oldest extant mountain ranges. For its age of 250 to 300 million years, the elevation of the mountains is unusually high. They were formed during the Uralian orogeny, due to the collision of the eastern edge of the supercontinent Laurussia with the young and rheologically weak continent of Kazakhstania, which now underlies much of Kazakhstan and West Siberia west of the Irtysh, and with intervening island arcs. The collision lasted nearly 90 million years, from the late Carboniferous to the early Triassic. Unlike the other major orogens of the Paleozoic (the Appalachians, Caledonides and Variscides), the Urals have not undergone post-orogenic extensional collapse and are unusually well preserved for their age, being underlain by a pronounced crustal root. East and south of the Urals much of the orogen is buried beneath later Mesozoic and Cenozoic sediments. The adjacent Pay-Khoy Ridge to the north and Novaya Zemlya are not part of the Uralian orogen and formed later.
Many deformed and metamorphosed rocks, mostly of Paleozoic age, surface within the Urals. The sedimentary and volcanic layers are folded and faulted. The sediments to the west of the Ural Mountains are formed of limestone, dolomite and sandstone left from ancient shallow seas. The eastern side is dominated by basalts.
The western slope of the Ural Mountains has predominantly karst topography, especially in the Sylva River basin, which is a tributary of the Chusovaya River. It is composed of severely eroded sedimentary rocks (sandstones and limestones) that are about 350 million years old. There are many caves, sinkholes and underground streams. The karst topography is much less developed on the eastern slopes. The eastern slopes are relatively flat, with some hills and rocky outcrops and contain alternating volcanic and sedimentary layers dated to the middle Paleozoic Era. Most high mountains consist of weather - resistant rocks such as quartzite, schist and gabbro that are between 570 and 395 million years old. The river valleys are underlain by limestone.
The Ural Mountains contain about 48 species of economically valuable ores and economically valuable minerals. Eastern regions are rich in chalcopyrite, nickel oxide, gold, platinum, chromite and magnetite ores, as well as in coal (Chelyabinsk Oblast), bauxite, talc, fireclay and abrasives. The Western Urals contain deposits of coal, oil, natural gas (Ishimbay and Krasnokamsk areas) and potassium salts. Both slopes are rich in bituminous coal and lignite, and the largest deposit of bituminous coal is in the north (Pechora field). The specialty of the Urals is precious and semi-precious stones, such as emerald, amethyst, aquamarine, jasper, rhodonite, malachite and diamond. Some of the deposits, such as the magnetite ores at Magnitogorsk, are already nearly depleted.
Many rivers originate in the Ural Mountains. The western slopes south of the border between the Komi Republic and Perm Krai and the eastern slopes south of approximately 54 ° 30'N drain into the Caspian Sea via the Kama and Ural River basins. The tributaries of the Kama include the Vishera, Chusovaya, and Belaya and originate on both the eastern and western slopes. The rest of the Urals drain into the Arctic Ocean, mainly via the Pechora basin in the west, which includes the Ilych, Shchugor, and the Usa, and via the Ob basin in the east, which includes the Tobol, Tavda, Iset, Tura and Severnaya Sosva. The rivers are frozen for more than half the year. Generally, the western rivers have higher flow volume than the eastern ones, especially in the Northern and Nether - Polar regions. Rivers are slower in the Southern Urals. This is because of low precipitation and the relatively warm climate resulting in less snow and more evaporation.
The mountains contain a number of deep lakes. The eastern slopes of the Southern and Central Urals have most of these, among the largest of which are the Uvildy, Itkul, Turgoyak, and Tavatuy lakes. The lakes found on the western slopes are less numerous and also smaller. Lake Bolshoye Shchuchye, the deepest lake in the Polar Urals, is 136 meters (446 ft) deep. Other lakes, too, are found in the glacial valleys of this region. Spas and sanatoriums have been built to take advantage of the medicinal muds found in some of the mountain lakes.
The climate of the Urals is continental. The mountain ridges, elongated from north to south, effectively absorb sunlight thereby increasing the temperature. The areas west of the Ural Mountains are 1 -- 2 ° C (1.8 -- 3.6 ° F) warmer in winter than the eastern regions because the former are warmed by Atlantic winds whereas the eastern slopes are chilled by Siberian air masses. The average January temperatures increase in the western areas from − 20 ° C (− 4 ° F) in the Polar to − 15 ° C (5 ° F) in the Southern Urals and the corresponding temperatures in July are 10 ° C (50 ° F) and 20 ° C (68 ° F). The western areas also receive more rainfall than the eastern ones by 150 -- 300 mm (5.9 -- 11.8 in) per year. This is because the mountains trap clouds from the Atlantic Ocean. The highest precipitation, approximately 1,000 mm (39 in), is in the Northern Urals with up to 1,000 cm (390 in) snow. The eastern areas receive from 500 -- 600 mm (20 -- 24 in) in the north to 300 -- 400 mm (12 -- 16 in) in the south. Maximum precipitation occurs in the summer: the winter is dry because of the Siberian High.
The landscapes of the Urals vary with both latitude and longitude and are dominated by forests and steppes. The southern area of the Mughalzhar Hills is a semidesert. Steppes lie mostly in the southern and especially south - eastern Urals. Meadow steppes have developed on the lower parts of mountain slopes and are covered with zigzag and mountain clovers, Serratula gmelinii, dropwort, meadow - grass and Bromus inermis, reaching the height of 60 -- 80 cm. Much of the land is cultivated. To the south, the meadow steppes become more sparse, dry and low. The steep gravelly slopes of the mountains and hills of the eastern slopes of the Southern Urals are mostly covered with rocky steppes. River valleys contain willow, poplar and caragana shrubs.
Forest landscapes of the Urals are diverse, especially in the southern part. The western areas are dominated by dark coniferous taiga forests which change to mixed and deciduous forests in the south. The eastern mountain slopes have light coniferous taiga forests. The Northern Urals are dominated by conifers, namely Siberian fir, Siberian pine, Scots pine, Siberian spruce, Norway spruce and Siberian larch, as well as by silver and downy birches. The forests are much sparser in the Polar Urals. Whereas in other parts of the Ural Mountains they grow up to an altitude of 1000 m, in the Polar Urals the tree line is at 250 -- 400 m. The polar forests are low and are mixed with swamps, lichens, bogs and shrubs. Dwarf birch, mosses and berries (blueberry, cloudberry, black crowberry, etc.) are abundant. The forests of the Southern Urals are the most diverse in composition: here, together with coniferous forests are also abundant broadleaf tree species such as English oak, Norway maple and elm. The Virgin Komi Forests in the northern Urals are recognized as a World Heritage site.
The Ural forests are inhabited by animals typical of Siberia, such as elk, brown bear, fox, wolf, wolverine, lynx, squirrel, and sable (north only). Because of the easy accessibility of the mountains there are no specifically mountainous species. In the Middle Urals, one can see a rare mixture of sable and pine marten named kidus. In the Southern Urals, badger and black polecat are common. Reptiles and amphibians live mostly in the Southern and Central Ural and are represented by the common viper, lizards and grass snakes. Bird species are represented by capercaillie, black grouse, hazel grouse, spotted nutcracker, and cuckoos. In summers, the South and Middle Urals are visited by songbirds, such as nightingale and redstart.
The steppes of the Southern Urals are dominated by hares and rodents such as gophers, susliks, and jerboa. There are many birds of prey such as lesser kestrel and buzzards. The animals of the Polar Urals are few and are characteristic of the tundra; they include Arctic fox, tundra partridge, lemming, and reindeer. The birds of these areas include rough - legged buzzard, snowy owl, and rock ptarmigan.
The continuous and intensive economic development of the last centuries has affected the fauna, and wildlife is much diminished around all industrial centers. During World War II, hundreds of factories were evacuated from Western Russia before the German occupation, flooding the Urals with industry. Conservation measures include the establishment of national wildlife parks. There are nine strict nature reserves in the Urals: the Ilmen, the oldest, a mineralogical reserve founded in 1920 in Chelyabinsk Oblast; Pechora-Ilych in the Komi Republic; Bashkir and its former branch Shulgan-Tash in Bashkortostan; Visim in Sverdlovsk Oblast; Southern Ural in Bashkortostan; Basegi in Perm Krai; Vishera in Perm Krai; and Denezhkin Kamen in Sverdlovsk Oblast.
The area has also been severely damaged by the plutonium-producing facility Mayak, opened in Chelyabinsk-40 (later called Chelyabinsk-65, Ozyorsk) in the Southern Urals after World War II. Its plants went into operation in 1948 and, for the first ten years, dumped unfiltered radioactive waste into the Techa River and Lake Karachay. In 1990, efforts were underway to contain the radiation in one of the lakes, which was estimated at the time to expose visitors to 500 millirem per day. As of 2006, 500 mrem in the natural environment was the upper limit of exposure considered safe for a member of the general public in an entire year (though workplace exposure over a year could exceed that by a factor of 10). Over 23,000 km² (8,900 sq mi) of land were contaminated in 1957 from a storage tank explosion, only one of several serious accidents that further polluted the region. The 1957 accident expelled 20 million curies of radioactive material, 90% of which settled into the land immediately around the facility. Although some reactors of Mayak were shut down in 1987 and 1990, the facility keeps producing plutonium.
The Urals have been viewed by Russians as a "treasure box" of mineral resources, which were the basis for their extensive industrial development. In addition to iron and copper, the Urals were a source of gold, malachite, alexandrite, and other gems such as those used by the court jeweller Fabergé. As Russians in other regions gather mushrooms or berries, Uralians gather mineral specimens and gems. Dmitry Mamin-Sibiryak (1852 -- 1912) and Pavel Bazhov (1879 -- 1950), as well as the post-Soviet writers Aleksey Ivanov and Olga Slavnikova, have written of the region.
The region served as a military stronghold during Peter the Great's Great Northern War with Sweden, during Stalin's rule when the Magnitogorsk Metallurgical Complex was built and Russian industry relocated to the Urals during the Nazi advance at the beginning of World War II, and as the center of the Soviet nuclear industry during the Cold War. Extreme levels of air, water, and radiological contamination by industrial wastes resulted, followed by a population exodus and economic depression at the time of the collapse of the Soviet Union; but in post-Soviet times additional mineral exploration, particularly in the northern Urals, has been productive and the region has attracted industrial investment.
|
identify and describe motion relative to different frames of reference | Frame of reference - wikipedia
In physics, a frame of reference (or reference frame) consists of an abstract coordinate system and the set of physical reference points that uniquely fix (locate and orient) the coordinate system and standardize measurements.
In n dimensions, n + 1 reference points are sufficient to fully define a reference frame. Using rectangular (Cartesian) coordinates, a reference frame may be defined with a reference point at the origin and a reference point at one unit distance along each of the n coordinate axes.
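For instance, with rectangular coordinates the n + 1 reference points translate directly into an origin plus one unit point per axis, and a point's coordinates in that frame follow by projection (a sketch; the helper function and its argument names are illustrative, and orthonormal axes are assumed):

```python
def frame_coordinates(origin, axis_points, point):
    """Coordinates of `point` in the frame fixed by n + 1 reference points:
    an origin plus one point at unit distance along each coordinate axis.
    Assumes the axes are orthonormal, so each coordinate is a projection."""
    # Axis directions: vectors from the origin to the unit reference points.
    axes = [tuple(a - o for a, o in zip(ap, origin)) for ap in axis_points]
    # Position of the point relative to the frame's origin.
    rel = tuple(p - o for p, o in zip(point, origin))
    # Project onto each axis to obtain the coordinates in this frame.
    return tuple(sum(r * e for r, e in zip(rel, axis)) for axis in axes)

# A 2-D frame (n = 2, so 3 reference points): origin at (1, 1), with axes
# rotated 90 degrees relative to the underlying coordinate system.
origin = (1.0, 1.0)
axis_points = [(1.0, 2.0), (0.0, 1.0)]  # unit points along the new x and y axes
print(frame_coordinates(origin, axis_points, (1.0, 3.0)))  # (2.0, 0.0)
```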
In Einsteinian relativity, reference frames are used to specify the relationship between a moving observer and the phenomenon or phenomena under observation. In this context, the phrase often becomes "observational frame of reference '' (or "observational reference frame ''), which implies that the observer is at rest in the frame, although not necessarily located at its origin. A relativistic reference frame includes (or implies) the coordinate time, which does not correspond across different frames moving relatively to each other. The situation thus differs from Galilean relativity, where all possible coordinate times are essentially equivalent.
The need to distinguish between the various meanings of "frame of reference '' has led to a variety of terms. For example, sometimes the type of coordinate system is attached as a modifier, as in Cartesian frame of reference. Sometimes the state of motion is emphasized, as in rotating frame of reference. Sometimes the way it transforms to frames considered as related is emphasized as in Galilean frame of reference. Sometimes frames are distinguished by the scale of their observations, as in macroscopic and microscopic frames of reference.
In this article, the term observational frame of reference is used when emphasis is upon the state of motion rather than upon the coordinate choice or the character of the observations or observational apparatus. In this sense, an observational frame of reference allows study of the effect of motion upon an entire family of coordinate systems that could be attached to this frame. On the other hand, a coordinate system may be employed for many purposes where the state of motion is not the primary concern. For example, a coordinate system may be adopted to take advantage of the symmetry of a system. In a still broader perspective, the formulation of many problems in physics employs generalized coordinates, normal modes or eigenvectors, which are only indirectly related to space and time. It seems useful to divorce the various aspects of a reference frame for the discussion below. We therefore take observational frames of reference, coordinate systems, and observational equipment as independent concepts, separated as below:
Here is a quotation applicable to moving observational frames ℜ and various associated Euclidean three-space coordinate systems (R, R′, etc.):
and this on the utility of separating the notions of ℜ and (R, R′, etc.):
and this, also on the distinction between ℜ and (R, R′, etc.):
and from J.D. Norton:
The discussion is taken beyond simple space - time coordinate systems by Brading and Castellani. Extension to coordinate systems using generalized coordinates underlies the Hamiltonian and Lagrangian formulations of quantum field theory, classical relativistic mechanics, and quantum gravity.
Although the term "coordinate system '' is often used (particularly by physicists) in a nontechnical sense, the term "coordinate system '' does have a precise meaning in mathematics, and sometimes that is what the physicist means as well.
A coordinate system in mathematics is a facet of geometry or of algebra, in particular, a property of manifolds (for example, in physics, configuration spaces or phase spaces). The coordinates of a point r in an n-dimensional space are simply an ordered set of n numbers:

r = (x^1, x^2, …, x^n).
In a general Banach space, these numbers could be (for example) coefficients in a functional expansion like a Fourier series. In a physical problem, they could be spacetime coordinates or normal mode amplitudes. In a robot design, they could be angles of relative rotations, linear displacements, or deformations of joints. Here we will suppose these coordinates can be related to a Cartesian coordinate system by a set of functions:
x^j = x^j(x, y, z, …),    j = 1, …, n,

where x, y, z, etc. are the n Cartesian coordinates of the point. Given these functions, coordinate surfaces are defined by the relations:

x^j(x, y, z, …) = constant,    j = 1, …, n.
The intersection of these surfaces defines coordinate lines. At any selected point, tangents to the intersecting coordinate lines at that point define a set of basis vectors (e_1, e_2, …, e_n) at that point. That is:

e_i(r) = ∂r / ∂x^i,    i = 1, …, n,
which can be normalized to be of unit length. For more detail see curvilinear coordinates.
Coordinate surfaces, coordinate lines, and basis vectors are components of a coordinate system. If the basis vectors are orthogonal at every point, the coordinate system is an orthogonal coordinate system.
An important aspect of a coordinate system is its metric tensor g_ik, which determines the arc length ds in the coordinate system in terms of its coordinates:

(ds)^2 = g_ik dx^i dx^k,

where repeated indices are summed over.
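As a concrete illustration (a numerical sketch; plane polar coordinates are used purely as an example), the basis vectors and metric tensor can be computed directly from the mapping to Cartesian coordinates:

```python
import math

def polar_to_cartesian(r, theta):
    """Mapping from polar coordinates (r, theta) to Cartesian (x, y)."""
    return (r * math.cos(theta), r * math.sin(theta))

def basis_vectors(r, theta, h=1e-6):
    """Tangents to the coordinate lines at (r, theta), via finite differences."""
    x0 = polar_to_cartesian(r, theta)
    e_r = tuple((a - b) / h for a, b in zip(polar_to_cartesian(r + h, theta), x0))
    e_theta = tuple((a - b) / h for a, b in zip(polar_to_cartesian(r, theta + h), x0))
    return e_r, e_theta

def metric_tensor(r, theta):
    """g_ik = e_i . e_k, so that (ds)^2 = g_ik dx^i dx^k."""
    e = basis_vectors(r, theta)
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    return [[dot(ei, ek) for ek in e] for ei in e]

g = metric_tensor(2.0, 0.7)
# For polar coordinates the exact result is diag(1, r^2) = [[1, 0], [0, 4]],
# recovering the familiar arc length (ds)^2 = dr^2 + r^2 dtheta^2.
```

The off-diagonal entries vanish to within the finite-difference error, confirming that polar coordinates form an orthogonal coordinate system.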
As is apparent from these remarks, a coordinate system is a mathematical construct, part of an axiomatic system. There is no necessary connection between coordinate systems and physical motion (or any other aspect of reality). However, coordinate systems can include time as a coordinate, and can be used to describe motion. Thus, Lorentz transformations and Galilean transformations may be viewed as coordinate transformations.
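For example, both transformations can be written as functions of an event's coordinates (a sketch; one spatial dimension, with the boost velocity v along x and c kept explicit):

```python
import math

def galilean(t, x, v):
    """Galilean boost: x' = x - v t, t' = t (time is universal)."""
    return t, x - v * t

def lorentz(t, x, v, c=299_792_458.0):
    """Lorentz boost along x: t' = g (t - v x / c^2), x' = g (x - v t)."""
    g = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return g * (t - v * x / c**2), g * (x - v * t)

# At everyday speeds the two transformations agree to high precision...
t, x, v = 1.0, 100.0, 30.0
assert abs(lorentz(t, x, v)[1] - galilean(t, x, v)[1]) < 1e-6

# ...but at half the speed of light they differ substantially: the
# transformed time picks up the dilation factor gamma = 1/sqrt(0.75).
tp, xp = lorentz(1.0, 0.0, 0.5 * 299_792_458.0)
assert abs(tp - 1.0 / math.sqrt(0.75)) < 1e-9
```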
General and specific topics of coordinate systems can be pursued following the See also links below.
An observational frame of reference, often referred to as a physical frame of reference, a frame of reference, or simply a frame, is a physical concept related to an observer and the observer 's state of motion. Here we adopt the view expressed by Kumar and Barve: an observational frame of reference is characterized only by its state of motion. However, there is lack of unanimity on this point. In special relativity, the distinction is sometimes made between an observer and a frame. According to this view, a frame is an observer plus a coordinate lattice constructed to be an orthonormal right - handed set of spacelike vectors perpendicular to a timelike vector. See Doran. This restricted view is not used here, and is not universally adopted even in discussions of relativity. In general relativity the use of general coordinate systems is common (see, for example, the Schwarzschild solution for the gravitational field outside an isolated sphere).
There are two types of observational reference frame: inertial and non-inertial. An inertial frame of reference is defined as one in which all laws of physics take on their simplest form. In special relativity these frames are related by Lorentz transformations, which are parametrized by rapidity. In Newtonian mechanics, a more restricted definition requires only that Newton 's first law holds true; that is, a Newtonian inertial frame is one in which a free particle travels in a straight line at constant speed, or is at rest. These frames are related by Galilean transformations. These relativistic and Newtonian transformations are expressed in spaces of general dimension in terms of representations of the Poincaré group and of the Galilean group.
In contrast to the inertial frame, a non-inertial frame of reference is one in which fictitious forces must be invoked to explain observations. An example is an observational frame of reference centered at a point on the Earth 's surface. This frame of reference orbits around the center of the Earth, which introduces the fictitious forces known as the Coriolis force, centrifugal force, and gravitational force. (All of these forces including gravity disappear in a truly inertial reference frame, which is one of free - fall.)
A further aspect of a frame of reference is the role of the measurement apparatus (for example, clocks and rods) attached to the frame (see Norton quote above). This question is not addressed in this article, and is of particular interest in quantum mechanics, where the relation between observer and measurement is still under discussion (see measurement problem).
In physics experiments, the frame of reference in which the laboratory measurement devices are at rest is usually referred to as the laboratory frame or simply "lab frame. '' An example would be the frame in which the detectors for a particle accelerator are at rest. The lab frame in some experiments is an inertial frame, but it is not required to be (for example the laboratory on the surface of the Earth in many physics experiments is not inertial). In particle physics experiments, it is often useful to transform energies and momenta of particles from the lab frame where they are measured, to the center of momentum frame "COM frame '' in which calculations are sometimes simplified, since potentially all kinetic energy still present in the COM frame may be used for making new particles.
In this connection it may be noted that the clocks and rods often used to describe observers ' measurement equipment in thought, in practice are replaced by a much more complicated and indirect metrology that is connected to the nature of the vacuum, and uses atomic clocks that operate according to the standard model and that must be corrected for gravitational time dilation. (See second, meter and kilogram).
In fact, Einstein felt that clocks and rods were merely expedient measuring devices and they should be replaced by more fundamental entities based upon, for example, atoms and molecules.
Consider a situation common in everyday life. Two cars travel along a road, both moving at constant velocities. See Figure 1. At some particular moment, they are separated by 200 metres. The car in front is travelling at 22 metres per second and the car behind is travelling at 30 metres per second. If we want to find out how long it will take the second car to catch up with the first, there are three obvious "frames of reference '' that we could choose.
First, we could observe the two cars from the side of the road. We define our "frame of reference" S as follows. We stand on the side of the road and start a stop-clock at the exact moment that the second car passes us, which happens to be when they are a distance d = 200 m apart. Since neither of the cars is accelerating, we can determine their positions by the following formulas, where x_1(t) is the position in meters of car one after time t in seconds and x_2(t) is the position of car two after time t:

x_1(t) = d + v_1 t = 200 + 22 t,    x_2(t) = v_2 t = 30 t.
Notice that these formulas predict at t = 0 s the first car is 200 m down the road and the second car is right beside us, as expected. We want to find the time at which x_1 = x_2. Therefore, we set x_1 = x_2 and solve for t, that is:

200 + 22 t = 30 t,    so    t = 200 / 8 = 25 seconds.
Alternatively, we could choose a frame of reference S′ situated in the first car. In this case, the first car is stationary and the second car is approaching from behind at a speed of v_2 − v_1 = 8 m/s. In order to catch up to the first car, it will take a time of d / (v_2 − v_1) = 200 / 8 s, that is, 25 seconds, as before. Note how much easier the problem becomes by choosing a suitable frame of reference. The third possible frame of reference would be attached to the second car. That example resembles the case just discussed, except the second car is stationary and the first car moves backward towards it at 8 m/s.
It would have been possible to choose a rotating, accelerating frame of reference, moving in a complicated manner, but this would have served to complicate the problem unnecessarily. It is also necessary to note that one is able to convert measurements made in one coordinate system to another. For example, suppose that your watch is running five minutes fast compared to the local standard time. If you know that this is the case, when somebody asks you what time it is, you are able to deduct five minutes from the time displayed on your watch in order to obtain the correct time. The measurements that an observer makes about a system depend therefore on the observer 's frame of reference (you might say that the bus arrived at 5 past three, when in fact it arrived at three).
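The arithmetic in the two frames above can be checked in a few lines (a sketch; the variable names follow the example):

```python
d, v1, v2 = 200.0, 22.0, 30.0  # separation (m) and speeds (m/s) from the example

# Frame S (roadside observer): set x1(t) = x2(t), i.e. d + v1*t = v2*t,
# and solve for t.
t_frame_S = d / (v2 - v1)

# Frame S' (riding in the front car): the front car is at rest and the rear
# car closes a 200 m gap at the relative speed v2 - v1 = 8 m/s.
closing_speed = v2 - v1
t_frame_S_prime = d / closing_speed

# Both frames give the same physical answer: 25 s.
assert t_frame_S == t_frame_S_prime == 25.0

# At that moment the cars are alongside each other: x1 = x2 = 750 m in frame S.
assert d + v1 * t_frame_S == v2 * t_frame_S == 750.0
```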
For a simple example involving only the orientation of two observers, consider two people standing, facing each other on either side of a north - south street. See Figure 2. A car drives past them heading south. For the person facing east, the car was moving towards the right. However, for the person facing west, the car was moving toward the left. This discrepancy is because the two people used two different frames of reference from which to investigate this system.
For a more complex example involving observers in relative motion, consider Alfred, who is standing on the side of a road watching a car drive past him from left to right. In his frame of reference, Alfred defines the spot where he is standing as the origin, the road as the x-axis and the direction in front of him as the positive y - axis. To him, the car moves along the x axis with some velocity v in the positive x-direction. Alfred 's frame of reference is considered an inertial frame of reference because he is not accelerating (ignoring effects such as Earth 's rotation and gravity).
Now consider Betsy, the person driving the car. Betsy, in choosing her frame of reference, defines her location as the origin, the direction to her right as the positive x-axis, and the direction in front of her as the positive y - axis. In this frame of reference, it is Betsy who is stationary and the world around her that is moving -- for instance, as she drives past Alfred, she observes him moving with velocity v in the negative y - direction. If she is driving north, then north is the positive y - direction; if she turns east, east becomes the positive y - direction.
Finally, as an example of non-inertial observers, assume Candace is accelerating her car. As she passes by him, Alfred measures her acceleration and finds it to be a in the negative x-direction. Assuming Candace's acceleration is constant, what acceleration does Betsy measure? If Betsy's velocity v is constant, she is in an inertial frame of reference, and she will find the acceleration to be the same as Alfred in her frame of reference, a in the negative y-direction. However, if she is accelerating at rate A in the negative y-direction (in other words, slowing down), she will find Candace's acceleration to be a′ = a − A in the negative y-direction, a smaller value than Alfred has measured. Similarly, if she is accelerating at rate A in the positive y-direction (speeding up), she will observe Candace's acceleration as a′ = a + A in the negative y-direction, a larger value than Alfred's measurement.
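Along a single axis, the transformation between the observers' measurements reduces to simple subtraction (a sketch; the numbers are made up, and the sign convention is that the frame's acceleration A is positive when it points the same way as the measured acceleration a):

```python
def acceleration_in_moving_frame(a_measured, frame_acceleration):
    """Acceleration of an object as seen from a frame that is itself
    accelerating at frame_acceleration, all along one axis: a' = a - A."""
    return a_measured - frame_acceleration

a = 3.0  # Candace's braking as measured by an inertial observer, m/s^2

# Observer at constant velocity (A = 0): agrees with the inertial observer.
assert acceleration_in_moving_frame(a, 0.0) == 3.0

# Observer decelerating at A = 1 m/s^2 in the same direction: a smaller value.
assert acceleration_in_moving_frame(a, 1.0) == 2.0

# Observer accelerating the opposite way (A = -1 in this convention): larger.
assert acceleration_in_moving_frame(a, -1.0) == 4.0
```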
Frames of reference are especially important in special relativity, because when a frame of reference is moving at some significant fraction of the speed of light, then the flow of time in that frame does not necessarily apply in another frame. The speed of light is considered to be the only true constant between moving frames of reference.
It is important to note some assumptions made above about the various inertial frames of reference. Newton, for instance, employed universal time, as explained by the following example. Suppose that you own two clocks, which both tick at exactly the same rate. You synchronize them so that they both display exactly the same time. The two clocks are now separated and one clock is on a fast moving train, traveling at constant velocity towards the other. According to Newton, these two clocks will still tick at the same rate and will both show the same time. Newton says that the rate of time as measured in one frame of reference should be the same as the rate of time in another. That is, there exists a "universal '' time and all other times in all other frames of reference will run at the same rate as this universal time irrespective of their position and velocity. This concept of time and simultaneity was later generalized by Einstein in his special theory of relativity (1905) where he developed transformations between inertial frames of reference based upon the universal nature of physical laws and their economy of expression (Lorentz transformations).
It is also important to note that the definition of inertial reference frame can be extended beyond three-dimensional Euclidean space. Newton assumed a Euclidean space, but general relativity uses a more general geometry. As an example of why this is important, consider the geometry of an ellipsoid. In this geometry, a "free" particle is defined as one at rest or traveling at constant speed on a geodesic path. Two free particles may begin at the same point on the surface, traveling with the same constant speed in different directions. After a length of time, the two particles collide at the opposite side of the ellipsoid. Both "free" particles traveled with a constant speed, satisfying the definition that no forces were acting. No acceleration occurred and so Newton's first law held true. This means that the particles were in inertial frames of reference. Since no forces were acting, it was the geometry of the situation which caused the two particles to meet each other again. In a similar way, it is now common to say that we exist in a four-dimensional geometry known as spacetime. In this picture, the curvature of this 4D space is responsible for the way in which two bodies with mass are drawn together even if no forces are acting. This curvature of spacetime replaces the force known as gravity in Newtonian mechanics and special relativity.
Here the relation between inertial and non-inertial observational frames of reference is considered. The basic difference between these frames is the need in non-inertial frames for fictitious forces, as described below.
An accelerated frame of reference is often delineated as being the "primed" frame, and all variables that are dependent on that frame are notated with primes, e.g. x′, y′, a′.

The vector from the origin of an inertial reference frame to the origin of an accelerated reference frame is commonly notated as R. Given a point of interest that exists in both frames, the vector from the inertial origin to the point is called r, and the vector from the accelerated origin to the point is called r′. From the geometry of the situation, we get

r = R + r′.
Taking the first and second derivatives of this with respect to time, we obtain

v = V + v′,    a = A + a′,

where V and A are the velocity and acceleration of the accelerated system with respect to the inertial system and v and a are the velocity and acceleration of the point of interest with respect to the inertial frame.
These equations allow transformations between the two coordinate systems; for example, we can now write Newton's second law as

F = m a = m (A + a′).
When there is accelerated motion due to a force being exerted, inertia manifests itself. If an electric car designed to recharge its battery system when decelerating is switched to braking, the batteries are recharged, illustrating the physical reality of this manifestation of inertia. However, the manifestation of inertia does not prevent acceleration (or deceleration), for it occurs in response to a change in velocity due to a force. Seen from the perspective of a rotating frame of reference, the manifestation of inertia appears to exert a force (either in the centrifugal direction, or in a direction orthogonal to an object's motion, the Coriolis effect).
A common sort of accelerated reference frame is a frame that is both rotating and translating (an example is a frame of reference attached to a CD which is playing while the player is carried). This arrangement leads to the equation (see Fictitious force for a derivation):

a = a′ + (dω/dt) × r′ + 2 ω × v′ + ω × (ω × r′) + A,

or, to solve for the acceleration in the accelerated frame,

a′ = a − (dω/dt) × r′ − 2 ω × v′ − ω × (ω × r′) − A.

Multiplying through by the mass m gives

m a′ = F_physical + F′_Euler + F′_Coriolis + F′_centrifugal − m A,

where ω is the angular velocity of the rotating frame, F_physical = m a is the net physical force on the object, and the fictitious forces are the Euler force F′_Euler = −m (dω/dt) × r′, the Coriolis force F′_Coriolis = −2 m ω × v′, and the centrifugal force F′_centrifugal = −m ω × (ω × r′).
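As a numerical sanity check (a sketch with made-up values; the angular velocity ω is taken constant, so the Euler term vanishes), the rotating-frame terms can be evaluated with plain cross products:

```python
def cross(u, v):
    """3-D cross product."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def apparent_acceleration(a, A, omega, r_p, v_p):
    """a' = a - 2 w x v' - w x (w x r') - A  (constant w, so no Euler term)."""
    coriolis = cross((2 * omega[0], 2 * omega[1], 2 * omega[2]), v_p)
    centripetal = cross(omega, cross(omega, r_p))
    return tuple(ai - ci - pi - Ai
                 for ai, ci, pi, Ai in zip(a, coriolis, centripetal, A))

# A particle at rest on a turntable rotating at w = 1 rad/s about z, 2 m from
# the axis: in the inertial frame it accelerates centripetally at w^2 r =
# 2 m/s^2 toward the axis, while in the rotating frame it does not accelerate.
omega = (0.0, 0.0, 1.0)
r_p = (2.0, 0.0, 0.0)   # position in the rotating frame
v_p = (0.0, 0.0, 0.0)   # at rest in the rotating frame
a = (-2.0, 0.0, 0.0)    # inertial (centripetal) acceleration
A = (0.0, 0.0, 0.0)     # frame origin is not translating

print(apparent_acceleration(a, A, omega, r_p, v_p))  # (0.0, 0.0, 0.0)
```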
|
who is the maya king depicted in the inscription of the temple of the tree of yellow corn | Temple of the Inscriptions - wikipedia
The Temple of the Inscriptions (Classic Maya: Bʼolon Yej Teʼ Naah (Mayan pronunciation: (ɓolon jex teʔ naːh)) "House of the Nine Sharpened Spears '') is the largest Mesoamerican stepped pyramid structure at the pre-Columbian Maya civilization site of Palenque, located in the modern - day state of Chiapas, Mexico. The structure was specifically built as the funerary monument for K'inich Janaab ' Pakal, ajaw or ruler of Palenque in the 7th century whose reign over the polity lasted almost 70 years. Construction of this monument commenced in the last decade of his life, and was completed by his son and successor K'inich Kan B'alam II. Within Palenque, the Temple of the Inscriptions is located in an area known as the Temple of the Inscriptions ' Court and stands at a right angle to the Southeast of the Palace. The Temple of the Inscriptions has been significant in the study of the ancient Maya, owing to the extraordinary sample of hieroglyphic text found on the Inscription Tablets, the impressive sculptural panels on the piers of the building, and the finds inside the tomb of Pakal.
The structure consists of a "temple '' structure that sits atop an eight - stepped pyramid (for a total of nine levels). The five entrances in the front of the building are surrounded by piers bearing both carved images and the hieroglyphic texts in Maya script for which the temple was named. Inside the temple, a stairway leads to the crypt containing the sarcophagus of Pakal.
The Temple of Inscriptions was finished a short time after 683. The construction was initiated by Pakal himself, although his son, K'inich Kan B'alam II completed the structure and its final decoration.
Despite the fact that Palenque, and the Temple of Inscriptions itself, had been visited and studied for more than two hundred years, the tomb of Pakal was not discovered until 1952. Alberto Ruz Lhuillier, a Mexican archaeologist, removed a stone slab from the floor of the temple, revealing a stairway filled with rubble. Two years later, when the stairway was cleared, it was discovered that it led into Pakal 's tomb.
The temple has six piers, or vertical panels. These are labeled A through F, each with texts, artistic representations, or both executed in reliefs made from plaster stucco. Piers A and F have only hieroglyphic text on them. Piers B through E have images of people holding an infant - like figure, which has a snake as one leg.
Pier A 's decoration consists entirely of hieroglyphic text. However, only eleven glyphs and glyph portions survive to this day. Among these glyphs, "capture '' can be clearly seen, but who or what was captured is unknown because the corresponding glyphs are unreadable.
Pier B depicts a scene in which a human figure holds the "child '' God K, one of whose legs is a serpent, in his hand.
The human figure is actually life size (165 cm tall), but its position and perspective make it appear much larger. It wears an elaborate feather headdress, a jaguar skin skirt, and a belt. The figure also used to wear a loincloth and a short beaded cape, but due to damage those are largely missing today, as is the head of the figure.
It is thought that the figure held by the human figure is God K, although his characteristic "flared forehead '' is only visible on Pier D. The figure of God K, often described as an "infant '' or "child, '' has one human leg and one serpent - leg. The human leg ends in a six - toed foot that is cradled by the other figure. It is likely, especially considering the emphasis placed on the polydactyly, that this feature is a reference to Pakal 's son, Kan B'alam II, who is portrayed in portraits with six fingers on one hand and six toes on one foot.
The standing figure on Pier C is thought to be a woman, possibly Pakal 's mother, Lady Zac - Kuk. The appearance of the psychoduct (a hollow duct that goes from the outer temple into the tomb of Pakal) and the stone band that connects to it have led many to compare the structure to an umbilical cord. The fact that this "umbilical cord '' connects the figure on Pier C to Pakal 's tomb (and by extension, Pakal himself) supports the identification of the figure as Lady Zac - Kuk. The umbilical cord can then be interpreted as a reference to the royal bloodline.
Pier D provides the evidence that the "baby '' figure is, in fact, God K. In this depiction of the "baby '' figure, it wears an "axe '' or "flare '' including a mirror (visible below the feathers of the standing figure 's headdress), something characteristic of God K. The figure on this pier is more complete than the same figure on any of the other piers. Also present in the depiction of God K are three vertical cuts on the god 's back. These have been shown to be intentional, but their meaning is still unknown.
The standing figure on Pier E is most likely Kan B'alam I. The elaborate headdress worn by the figure contains glyphs that identify him as "chan - bahlum. '' It is unlikely that this refers to Kan B'alam II because he is thought to be represented by the figure of God K. Because Kan B'alam II, great - great - grandson of Kan B'alam I, finished the decoration of the Temple of Inscriptions, this can be seen as an effort to reinforce the legitimacy of his claim to the throne; he is emphasizing his relationship to his ancestor and namesake, as well as his relationship to his father and grandmother.
Pier F has only one glyph block that remains today. It contains glyphs for what is thought to be a title, translated as "dead rabbit '', followed by the title and name "Kinich Kan - B'alam, '' after which comes an unknown glyph (possibly another title), and the glyph for Palenque.
Although much of the color on the piers has deteriorated, some is still visible today. Originally, the piers would have been extraordinarily colorful. Bright red, yellow, and blue would have been seen on their stucco sculpture. A thin coat of light red paint would have been applied to all of the stucco sculpture as a sort of background coloring while the stucco was still wet, binding the color to the building. Because the temple was repeatedly repainted, one can observe layers of pigment between layers of stucco. The color blue signified the Heavens and the Gods and would have been applied to things relating to the gods, as well as the glyphic texts on the sculpture. The color yellow related to Xibalba, the Maya underworld, which was associated with jaguars, so the jaguar skirts were colored accordingly.
The Temple of Inscriptions gets its name from three hieroglyphic tablets, known as the East Tablet, the Central Tablet, and the West Tablet, on the temple 's inner walls. These tablets emphasize the idea that events that happened in the past will be repeated on the same calendar date, a theme also found in the Books of Chilam Balam, and constitute one of the longest known Maya inscriptions (617 glyphs). Columns E through F mark the beginning of a record of various events in Pakal 's life that continues until the last two columns on the tablets, which announce his death and name Kan B'alam II as his heir. All of the tablets, excluding the final two columns, were completed during Pakal 's lifetime.
To prevent the collapse of the tomb due to the immense weight of the pyramid, the architects designed the hut - shaped chamber using cross vaulting and recessed buttresses.
The tomb of Pakal yielded several important archaeological finds and works of art.
Among these finds was the lid of Pakal's sarcophagus. In the image that covers it, Pakal lies on top of the "earth monster." Below him are the open jaws of a jaguar, symbolizing Xibalba. Above him is the Celestial Bird, perched atop the Cosmic Tree (represented by a cross) which, in turn, holds a Serpent in its branches. Thus, in the image Pakal lies between two worlds: the heavens and the underworld. Also on the sarcophagus are Pakal's ancestors, arranged in a line going back six generations.
Pakal 's death mask is another extraordinary artifact found in the tomb. The face of the mask is made entirely of jade, while the eyes consist of shells, mother of pearl, and obsidian.
There were several smaller jade heads packed into Pakal 's sarcophagus and a stucco portrait of the king was found under the base of it.
Five skeletons, both male and female, were found at the entrance of the crypt. These sacrificial victims were intended to follow Pakal into Xibalba.
All information on the piers was taken from Robertson 1983: 29 - 53.
Coordinates: 17 ° 29 ′ 01 '' N 92 ° 02 ′ 48 '' W / 17.4836 ° N 92.0468 ° W / 17.4836; - 92.0468
|
what are visual auditory and somatosensory receptive fields | Receptive field - wikipedia
The receptive field of an individual sensory neuron is the particular region of the sensory space (e.g., the body surface, or the visual field) in which a stimulus will modify the firing of that neuron. This region can be a hair in the cochlea or a piece of skin, retina, tongue or other part of an animal 's body. Additionally, it can be the space surrounding an animal, such as an area of auditory space that is fixed in a reference system based on the ears but that moves with the animal as it moves (the space inside the ears), or in a fixed location in space that is largely independent of the animal 's location (place cells). Receptive fields have been identified for neurons of the auditory system, the somatosensory system, and the visual system.
The term receptive field was first used by Sherrington (1906) to describe the area of skin from which a scratch reflex could be elicited in a dog. According to Alonso and Chen (2008) it was Hartline (1938) who applied the term to single neurons, in this case from the retina of a frog.
The concept of receptive fields can be extended further up the nervous system; if many sensory receptors all form synapses with a single cell further up, they collectively form the receptive field of that cell. For example, the receptive field of a ganglion cell in the retina of the eye is composed of input from all of the photoreceptors which synapse with it, and a group of ganglion cells in turn forms the receptive field for a cell in the brain. This process is called convergence.
The auditory system processes the temporal and spectral (i.e. frequency) characteristics of sound waves, so the receptive fields of auditory neurons are modeled as spectro-temporal receptive fields (STRFs): the specific spectro-temporal pattern in the auditory domain that causes the firing rate of the neuron to modulate with the stimulus. Linear STRFs are estimated by first calculating a spectrogram of the acoustic stimulus, which determines how the spectral density of the stimulus changes over time, often using the short-time Fourier transform (STFT). The neuron's firing rate is modeled over time, possibly using a peristimulus time histogram when combining responses over multiple repetitions of the acoustic stimulus. Then, linear regression is used to predict the firing rate of that neuron as a weighted sum of the spectrogram. The weights learned by the linear model are the STRF, and represent the specific acoustic pattern that causes modulation in the firing rate of the neuron. STRFs can also be understood as the transfer function that maps an acoustic stimulus input to a firing-rate response output.
In the somatosensory system, receptive fields are regions of the skin or of internal organs. Some types of mechanoreceptors have large receptive fields, while others have smaller ones.
Large receptive fields allow the cell to detect changes over a wider area, but lead to a less precise perception. Thus, the fingers, which require the ability to detect fine detail, have many, densely packed (up to 500 per square cm) mechanoreceptors with small receptive fields (around 10 square mm), while the back and legs, for example, have fewer receptors with large receptive fields. Receptors with large receptive fields usually have a "hot spot '', an area within the receptive field (usually in the center, directly over the receptor) where stimulation produces the most intense response.
Tactile - sense - related cortical neurons have receptive fields on the skin that can be modified by experience or by injury to sensory nerves resulting in changes in the field 's size and position. In general these neurons have relatively large receptive fields (much larger than those of dorsal root ganglion cells). However, the neurons are able to discriminate fine detail due to patterns of excitation and inhibition relative to the field which leads to spatial resolution.
receptive field = center + surround
In the visual system, receptive fields are volumes in visual space. They are smallest in the fovea, where they can span just a few minutes of arc (the size of a dot on this page), and grow larger toward the periphery, where a single field can cover the equivalent of the whole page. For example, the receptive field of a single photoreceptor is a cone - shaped volume comprising all the visual directions in which light will alter the firing of that cell. Its apex is located in the center of the lens and its base essentially at infinity in visual space. Traditionally, visual receptive fields were portrayed in two dimensions (e.g., as circles, squares, or rectangles), but these are simply slices, cut along the screen on which the researcher presented the stimulus, of the volume of space to which a particular cell will respond. In the case of binocular neurons in the visual cortex, receptive fields do not extend to optical infinity. Instead, they are restricted to a certain interval of distance from the animal, or from where the eyes are fixating (see Panum 's area).
The receptive field is often identified as the region of the retina where the action of light alters the firing of the neuron. In retinal ganglion cells (see below), this area of the retina would encompass all the photoreceptors, all the rods and cones from one eye that are connected to this particular ganglion cell via bipolar cells, horizontal cells, and amacrine cells. In binocular neurons in the visual cortex, it is necessary to specify the corresponding area in both retinas (one in each eye). Although these can be mapped separately in each retina by shutting one or the other eye, the full influence on the neuron 's firing is revealed only when both eyes are open.
Hubel and Wiesel advanced the theory that receptive fields of cells at one level of the visual system are formed from input by cells at a lower level of the visual system. In this way, small, simple receptive fields could be combined to form large, complex receptive fields. Later theorists elaborated this simple, hierarchical arrangement by allowing cells at one level of the visual system to be influenced by feedback from higher levels.
Receptive fields have been mapped for all levels of the visual system, from photoreceptors to retinal ganglion cells, to lateral geniculate nucleus cells, to visual cortex cells, to extrastriate cortical cells. Studies based on perception alone do not give a full picture of visual phenomena, so electrophysiological tools must be used as well; the retina, after all, is an outgrowth of the brain.
Each ganglion cell or optic nerve fiber bears a receptive field, which increases in size with increasing light intensity. In the largest field, the light has to be more intense at the periphery of the field than at the center, showing that some synaptic pathways are preferred over others.
The organization of ganglion cells ' receptive fields, composed of inputs from many rods and cones, provides a way of detecting contrast, and is used for detecting objects ' edges. Each receptive field is arranged into a central disk, the "center '', and a concentric ring, the "surround '', each region responding oppositely to light. For example, light in the centre might increase the firing of a particular ganglion cell, whereas light in the surround would decrease the firing of that cell.
There are two types of retinal ganglion cells: "on - center '' and "off - center ''. An on - center cell is stimulated when the center of its receptive field is exposed to light, and is inhibited when the surround is exposed to light. Off - center cells have just the opposite reaction. On the edge between the two, in mammals, an on - off effect (i.e., discharging at switching on or off but not at a duration of either state) is present. Stimulation of the center of an on - center cell 's receptive field produces depolarization and an increase in the firing of the ganglion cell, stimulation of the surround produces a hyperpolarization and a decrease in the firing of the cell, and stimulation of both the center and surround produces only a mild response (due to mutual inhibition of center and surround). An off - center cell is stimulated by activation of the surround and inhibited by stimulation of the center (see figure).
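This antagonistic center-surround organization is commonly idealized as a difference of Gaussians: a narrow excitatory center minus a broad inhibitory surround. The sketch below (with arbitrary toy sizes and widths, not taken from any particular measurement) shows an on-center kernel responding with excitation to a spot of light in the center and with net inhibition to an annulus confined to the surround:

```python
import numpy as np

def dog_kernel(size=21, sigma_c=1.5, sigma_s=4.0):
    """Difference-of-Gaussians model of an on-center receptive field:
    a narrow excitatory center minus a broad inhibitory surround."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    center = np.exp(-r2 / (2 * sigma_c**2)) / (2 * np.pi * sigma_c**2)
    surround = np.exp(-r2 / (2 * sigma_s**2)) / (2 * np.pi * sigma_s**2)
    return center - surround

rf = dog_kernel()

# A small spot of light in the center excites the cell ...
spot_center = np.zeros_like(rf)
spot_center[10, 10] = 1.0
resp_center = np.sum(rf * spot_center)

# ... while an annulus of light confined to the surround inhibits it.
yy, xx = np.indices(rf.shape)
r = np.hypot(xx - 10, yy - 10)
annulus = ((r > 4) & (r < 8)).astype(float)
resp_surround = np.sum(rf * annulus)
```

An off-center cell is modeled by simply negating the same kernel; stimulating both regions at once largely cancels, matching the mild response described above.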
Photoreceptors that are part of the receptive fields of more than one ganglion cell are able to excite or inhibit postsynaptic neurons because they release the neurotransmitter glutamate at their synapses, which can act to depolarize or to hyperpolarize a cell, depending on whether there is a metabotropic or ionotropic receptor on that cell.
The center - surround receptive field organization allows ganglion cells to transmit information not merely about whether photoreceptor cells are exposed to light, but also about the differences in firing rates of cells in the center and surround. This allows them to transmit information about contrast. The size of the receptive field governs the spatial frequency of the information: small receptive fields are stimulated by high spatial frequencies, fine detail; large receptive fields are stimulated by low spatial frequencies, coarse detail. Retinal ganglion cell receptive fields convey information about discontinuities in the distribution of light falling on the retina; these often specify the edges of objects. In dark adaptation, the peripheral opposite activity zone becomes inactive, but, since it is a diminishing of inhibition between center and periphery, the active field can actually increase, allowing more area for summation.
The receptive field tends to favor movement (such as a light or dark spot moving from center to periphery, or vice versa), as well as contours, because of its nonuniform organization. The receptive field center is about as wide as the ganglion cell 's dendritic spread, while the surround is built by amacrine cells connecting a wide area of bipolar cells to the ganglion cell. These amacrine cells can also inhibit signals from the periphery from being transmitted to the ganglion cell, rendering it on - center, off - surround. In the rabbit, a patch of light moving in one ("preferred '') direction will excite a ganglion cell, whereas movement in the opposite ("null '') direction will not, and even inhibits spontaneous activity. One proposed mechanism is a linear chain of photoreceptors, each inhibiting its following neighbor: the inhibition arrives in time to suppress the response when the stimulus moves in the null direction, but too late at the adjacent cell when it moves in the preferred direction.
Further along in the visual system, groups of ganglion cells form the receptive fields of cells in the lateral geniculate nucleus. Receptive fields are similar to those of ganglion cells, with an antagonistic center - surround system and cells that are either on - or off center.
Receptive fields of cells in the visual cortex are larger and have more complex stimulus requirements than those of retinal ganglion cells or lateral geniculate nucleus cells. Hubel and Wiesel (e.g., Hubel, 1963; Hubel and Wiesel, 1959) classified receptive fields of cells in the visual cortex into simple cells, complex cells, and hypercomplex cells. Simple cell receptive fields are elongated, for example with an excitatory central oval and an inhibitory surrounding region, or approximately rectangular, with one long side being excitatory and the other inhibitory. Images for these receptive fields need to have a particular orientation in order to excite the cell. For complex - cell receptive fields, a correctly oriented bar of light might need to move in a particular direction in order to excite the cell. For hypercomplex receptive fields, the bar might also need to be of a particular length.
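A simple cell's orientation-tuned receptive field is often idealized as a Gabor function: a sinusoidal grating windowed by a Gaussian envelope. The following sketch (all kernel sizes and parameters are arbitrary illustrative values, not drawn from the studies cited above) shows why a bar pattern at the preferred orientation drives such a filter far more strongly than the same pattern rotated 90 degrees:

```python
import numpy as np

def gabor(size=31, theta=0.0, wavelength=8.0, sigma=5.0, phase=0.0):
    """Gabor function, a standard idealized model of a simple-cell
    receptive field: a sinusoidal grating under a Gaussian envelope."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    xr = xx * np.cos(theta) + yy * np.sin(theta)   # rotate coordinates
    yr = -xx * np.sin(theta) + yy * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / wavelength + phase)
    return envelope * carrier

rf = gabor(theta=0.0)

# Stimulus matching the receptive field's orientation vs. the same
# pattern rotated 90 degrees.
bar_preferred = gabor(theta=0.0)
bar_orthogonal = gabor(theta=np.pi / 2)

# Linear response: pointwise product summed over the field.
resp_pref = np.sum(rf * bar_preferred)
resp_orth = np.sum(rf * bar_orthogonal)
```

The orthogonal stimulus produces a near-zero response because its oscillations cancel against the filter's excitatory and inhibitory bands, which is the essence of orientation selectivity.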
In extrastriate visual areas, cells can have very large receptive fields requiring very complex images to excite the cell. For example, in the inferotemporal cortex, receptive fields cross the midline of visual space and require images such as radial gratings or hands. It is also believed that in the fusiform face area, images of faces excite the cortex more than other images. This property was one of the earliest major results obtained through fMRI (Kanwisher, McDermott and Chun, 1997); the finding was confirmed later at the neuronal level (Tsao, Freiwald, Tootell and Livingstone, 2006). In a similar vein, people have looked for other category - specific areas and found evidence for regions representing views of places (parahippocampal place area) and the body (Extrastriate body area). However, more recent research has suggested that the fusiform face area is specialised not just for faces, but also for any discrete, within - category discrimination.
Idealized models of visual receptive fields similar to those found in the retina, lateral geniculate nucleus (LGN) and the primary visual cortex of higher mammals can be derived in an axiomatic way from structural requirements on the first stages of visual processing that reflect symmetry properties of the surrounding world. Specifically, functional models for linear receptive fields can be derived in a principled manner to constitute a combination of Gaussian derivatives over the spatial domain and either non-causal Gaussian derivatives or truly time - causal temporal scale - space kernels over the temporal domain. Such receptive fields can be shown to enable computation of invariant visual representations under natural image transformations. By these results, the different shapes of receptive field profiles found in biological vision, which are tuned to different sizes and orientations in the image domain as well as to different image velocities in space - time, can be seen as well adapted to structure of the physical world and be explained from the requirement that the visual system should be invariant to the natural types of image transformations that occur in its environment.
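A minimal sketch of the spatial part of such a Gaussian-derivative model follows (parameter values are arbitrary; the temporal, time-causal kernels discussed above are omitted). The zeroth-order kernel integrates to roughly one, while the derivative kernels integrate to roughly zero, since they respond to change rather than to uniform illumination:

```python
import numpy as np

def gaussian_derivative_kernels(sigma=2.0, size=17):
    """Spatial Gaussian and its first two derivatives along x: a minimal
    sketch of the Gaussian-derivative family of receptive field models."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    gx = -xx / sigma**2 * g                       # first derivative: edge-like
    gxx = (xx**2 / sigma**4 - 1 / sigma**2) * g   # second derivative: bar-like
    return g, gx, gxx

g, gx, gxx = gaussian_derivative_kernels()
```

Kernels at different scales `sigma` and rotated to different orientations give the family of profiles that, per the theory above, matches the variety of receptive field shapes observed in the retina, LGN, and primary visual cortex.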
A computational theory for auditory receptive fields can be expressed in a structurally similar way, permitting the derivation of auditory receptive fields in two stages:
Interestingly, the shapes of the receptive field functions in these models can be determined by necessity from structural properties of the environment combined with requirements about the internal structure of the auditory system to enable theoretically well - founded processing of sound signals at different temporal and log - spectral scales.
The term receptive field is also used in the context of artificial neural networks, most often in relation to convolutional neural networks (CNNs), where it adopts a meaning reminiscent of receptive fields in biological nervous systems. CNNs have a distinct architecture, designed to mimic the way in which real animal brains are understood to function: instead of having every neuron in each layer connect to all neurons in the next layer, as in a multilayer perceptron (MLP), the neurons are arranged in a 3 - dimensional structure that takes into account the spatial relationships in the original data. Since CNNs are used primarily in the field of computer vision, the data typically form an image, with each input neuron representing one pixel. A neuron in the next layer receives connections from only a subset of the input neurons (pixels), not from all of them as in an MLP or other traditional neural networks. The receptive field of a neuron in one of the lower layers thus encompasses only a small area of the image, while the receptive field of a neuron in a subsequent (higher) layer combines the receptive fields of several (but not all) neurons in the layer before; a neuron in a higher layer "looks '' at a larger portion of the image than does a neuron in a lower layer. In this way, each successive layer is capable of learning increasingly abstract features of the original image.
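The growth of a CNN neuron's receptive field across layers follows a simple recurrence. The sketch below (a generic calculation, not tied to any particular published network) computes the receptive field size of the final layer of a small stack of convolutional layers:

```python
def receptive_field(layers):
    """Receptive field size of the last layer of a sequential CNN,
    given (kernel_size, stride) for each layer.

    Standard recurrence, starting from field size r = 1 and jump j = 1
    (the jump is the spacing, in input pixels, between adjacent neurons):
        r <- r + (kernel_size - 1) * j
        j <- j * stride
    """
    r, j = 1, 1
    for k, s in layers:
        r += (k - 1) * j
        j *= s
    return r

# Example: three 3x3 convolutions, the second with stride 2.
layers = [(3, 1), (3, 2), (3, 1)]
rf = receptive_field(layers)
# Each neuron in the final layer "sees" an rf x rf patch of the input.
```

The stride-2 layer doubles the jump, so every layer after it widens the receptive field twice as fast, which is why downsampling layers are the main driver of receptive field growth in deep networks.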
The use of receptive fields in this fashion is thought to give CNNs an advantage in recognizing visual patterns when compared to other types of neural networks.
|
where does the bristol channel start and finish | Bristol Channel - wikipedia
The Bristol Channel (Welsh: Môr Hafren) is a major inlet in the island of Great Britain, separating South Wales from Devon and Somerset in South West England. It extends from the lower estuary of the River Severn (Welsh: Afon Hafren) to the North Atlantic Ocean. It takes its name from the English city of Bristol, and is over 30 miles (50 km) across at its widest point.
Long stretches of the coastline of the Bristol Channel, on both the South Wales and West Country sides, are designated as Heritage Coast, including Exmoor, Bideford Bay, the Hartland Point peninsula, Lundy Island, Glamorgan, Gower Peninsula, South Pembrokeshire and Caldey Island.
Until Tudor times the Bristol Channel was known as the Severn Sea, and it is still known as this in both Welsh: Môr Hafren and Cornish: Mor Havren.
The International Hydrographic Organisation now defines the western limit of the Bristol Channel as "a line joining Hartland Point in Devon (51 ° 01 ′ N 4 ° 32 ′ W / 51.017 ° N 4.533 ° W / 51.017; - 4.533) to St. Govan 's Head in Pembrokeshire (51 ° 36 ′ N 4 ° 55 ′ W / 51.600 ° N 4.917 ° W / 51.600; - 4.917) ''. The IHO previously put the western limit at a line from Trevose Head in Cornwall to Skomer Island in Pembrokeshire, in an area now considered part of the Celtic Sea.
The upper limit of the Channel is between Sand Point, Somerset (immediately north of Weston - super-Mare) and Lavernock Point (immediately south of Penarth in South Wales). East of this line is the Severn Estuary. Western and northern Pembrokeshire, and north Cornwall are outside the defined limits of the Bristol Channel, and are considered part of the seaboard of the Atlantic Ocean, more specifically the Celtic Sea.
Within its officially defined limits, the Bristol Channel extends for some 75 miles (121 km) from west to east, but taken as a single entity the Bristol Channel - Severn Estuary system extends eastward to the limit of tidal influence near Gloucester. The channel shoreline alternates between resistant and erosional cliff features, interspersed with depositional beaches backed by coastal sand dunes; in the Severn Estuary, a low - lying shoreline is fronted by extensive intertidal mudflats. The Severn Estuary and most of the embayments around the channel are less than 10 m in depth. Within the channel, however, there is an E-W trending valley 20 to 30 m in depth that is considered to have been formed by fluvial run - off during Pleistocene phases of lower sea level. Along the margins of the Bristol Channel are extensive linear tidal sandbanks that are actively dredged as a source of aggregates, and in the Outer Bristol Channel off the Welsh coast are the OBel Sands, an extensive area of sand waves up to 19 m high, covering an area of over 1,000 km².
The Bristol Channel is an important area for wildlife, in particular waders, and has protected areas, including national nature reserves such as Bridgwater Bay at the mouth of the River Parrett. At low tide large parts of the channel become mud flats due to the tidal range of 43 feet (13 m), second only to the Bay of Fundy in Eastern Canada. Development schemes have been proposed along the channel, including an airport and a tidal barrier for electricity generation, but conservation issues have so far managed to block such schemes.
The largest islands in the Bristol Channel are Lundy, Steep Holm and Flat Holm. The islands and headlands provide some shelter for the upper reaches of the channel from storms. These islands are mostly uninhabited and protected as nature reserves, and are home to some unique wild flower species. In 1971 a proposal was made by the Lundy Field Society to establish a marine reserve. Provision for the establishment of statutory Marine Nature Reserves was included in the Wildlife and Countryside Act 1981, and on 21 November 1986 the Secretary of State for the Environment announced the designation of a statutory reserve at Lundy. There is an outstanding variety of marine habitats and wildlife, and a large number of rare and unusual species in the waters around Lundy, including some species of seaweed, branching sponges, sea fans and cup corals.
The Bristol Channel has some extensive and popular beaches and spectacular scenery, particularly on the coasts of Exmoor and Bideford Bay in North Devon and the Vale of Glamorgan and Gower Peninsula on the Glamorgan coast. The western stretch of Exmoor boasts Hangman cliffs, the highest cliffs in mainland Britain, culminating near Combe Martin in the "Great Hangman '', a 1,043 ft (318 m) ' hog - backed ' hill with a cliff - face of 820 ft (250 m); its sister cliff the "Little Hangman '' has a cliff - face of 716 ft (218 m). On the Gower Peninsula, at its western extremity is the Worms Head, a headland of carboniferous limestone which is approachable on foot at low tide only. The beaches of Gower (at Rhossili, for example) and North Devon, such as Croyde and Woolacombe, win awards for their water quality and setting, as well as being renowned for surfing. In 2004, The Times "Travel '' magazine selected Barafundle Bay in Pembrokeshire as one of the twelve best beaches in the world. In 2007, Oxwich Bay made the same magazine 's Top 12 best beaches in the world list, and was also selected as Britain 's best beach for 2007.
The city of Swansea is the largest settlement on the Welsh coast of the Bristol Channel. Other major built - up areas include Barry (including Barry Island), Port Talbot and Llanelli. Smaller resort towns include Porthcawl, Mumbles, Saundersfoot and Tenby. The cities of Cardiff and Newport adjoin the Severn estuary, but lie upstream of the Bristol Channel itself.
On the English side, the resort towns of Weston - super-Mare, Burnham - on - Sea, Watchet, Minehead and Ilfracombe are located on the Bristol Channel. Barnstaple and Bideford are sited on estuaries opening onto Bideford Bay, at the westernmost end of the Bristol Channel. Just upstream of the official eastern limit of the Channel, adjoining the Severn estuary, is the city of Bristol, originally established on the River Avon but now with docks on the Severn estuary, which is one of the most important ports in Britain. It gives its name to the Channel, which forms its seaward approach.
There are no road or rail crossings of the Bristol Channel so direct crossings are necessarily made by sea or air, or less directly by the road and rail crossings of the Severn estuary. The Channel can be a hazardous area of water because of its strong tides and the rarity of havens on the north Devon and Somerset coasts that can be entered in all states of the tide. Because of the treacherous waters, pilotage is an essential service for shipping. A specialised style of sailing boat, the Bristol Channel Pilot Cutter, developed in the area.
P and A Campbell of Bristol were the main operators of pleasure craft, particularly paddle steamers, from the mid-19th century to the late 1970s, together with the Barry Railway Company. These served harbours along both coasts, such as Ilfracombe and Weston - super-Mare.
This tradition is continued each summer by the PS Waverley, the last seagoing paddle steamer in the world, built in 1947. The steamer provides pleasure trips between the Welsh and English coasts and to the islands of the channel. Trips are also offered on the MV Balmoral, also owned by Waverley Excursions.
The Burnham - on - Sea Area Rescue Boat (BARB) uses a hovercraft to rescue people from the treacherous mud flats on that part of the coast. A hovercraft was recently tested to determine the feasibility of setting up a similar rescue service in Weston - super-Mare. There are also RNLI lifeboats stationed along both sides of the Channel. In the Severn Estuary, in - shore rescue is provided by two independent lifeboat trusts, the Severn Area Rescue Association (SARA) and the Portishead and Bristol Lifeboat Trust.
The Bristol Channel and Severn Estuary have the potential to generate more renewable electricity than all other UK estuaries. It has been stated that, if harnessed, it could create up to 5 % of the UK 's electricity, contributing significantly to UK climate change goals and European Union renewable energy targets. Earlier studies of a possible Severn Barrage included estimates of bed load transport of sand and gravel by tidal ebb and flood that would be interrupted if a solid dam were built across the Channel. More recently, the Severn Tidal Power Feasibility Study was launched in 2008 by the British Government to assess all tidal range technologies, including barrages, lagoons and others. The study will look at the costs, benefits and impacts of a Severn tidal power scheme and will help Government decide whether it could or could not support such a scheme. Some of the options being looked at may include a road crossing downstream of the existing crossings of the estuary.
On 30 January 1607 (New style) thousands of people were drowned, houses and villages swept away, farmland inundated and flocks destroyed when a flood hit the shores of the Channel. The devastation was particularly bad on the Welsh side, from Laugharne in Carmarthenshire to above Chepstow on the English border. Cardiff was the most badly affected town. There remain plaques up to 8 ft (2.4 m) above sea level to show how high the waters rose on the sides of the surviving churches. It was commemorated in a contemporary pamphlet "God 's warning to the people of England by the great overflowing of the waters or floods. ''
The cause of the flood is uncertain and disputed. It had long been believed that the floods were caused by a combination of meteorological extremes and tidal peaks, but research published in 2002 showed some evidence of a tsunami in the Channel. Although some evidence from the time describes events similar to a tsunami, there are also similarities to descriptions of the 1953 floods in East Anglia, which were caused by a storm surge. It has been shown that the tide and weather at the time were capable of generating such a surge.
In 1835 John Ashley was on the shore at Clevedon with his son who asked him how the people on Flat Holm could go to church. For the next three months Ashley voluntarily ministered to the population of the island. From there he recognised the needs of the seafarers on the four hundred sailing vessels in the Bristol Channel and created the Bristol Channel Mission. He raised funds and in 1839 a specially designed mission cutter was built with a main cabin which could be converted into a chapel for 100 people, this later became first initiative of the Mission to Seafarers.
Much of the coastline at the western end of the Bristol Channel faces west towards the Atlantic Ocean meaning that a combination of an off - shore (east) wind and a generous Atlantic swell produces excellent surf along the beaches. The heritage coasts of the Vale of Glamorgan, Bideford Bay and Gower are, along with the Atlantic coasts of Pembrokeshire and Cornwall, the key areas for surfing in the whole of Britain. Although slightly overshadowed by the Atlantic coasts of North Cornwall and West Pembrokeshire, both Gower and Bideford Bay nevertheless have several superb breaks -- notably Croyde in Bideford Bay and Langland Bay on Gower -- and surfing in Gower and Bideford Bay is enhanced by the golden beaches, clean blue waters, excellent water quality and good facilities close by to the main surf breaks.
The first known crossing of the Bristol Channel (from Swansea to Woody Bay, near Lynton, Devon) by a windsurfer was Adam Cowles in April 2006, apparently accidentally. Other windsurfers have reported making the crossing as a training exercise (Hugh Sim Williams) or as part of a windsurf around Britain (e.g. Jono Dunnett). The coastguard has stated that windsurf crossings of the channel are dangerous and should not be attempted without appropriate preparations.
The high quality of the landscape of much of both coasts of the Bristol Channel means that they are popular destinations for walkers. Sections of two national trails; the South West Coast Path and the Pembrokeshire Coast Path follow these shores, and the West Somerset Coast Path extends eastwards from the South West Coast Path to the mouth of the River Parrett. A continuous coastal path, the Wales Coast Path, was opened in May 2012 along the entire Welsh shore under the auspices of the Countryside Council for Wales.
The first person to swim across the Bristol Channel was Kathleen Thomas, a 21 - year - old woman from Penarth who swam to Weston - super-Mare on 5 September 1927. She completed the swim, nominally 11 miles but equivalent to 22 miles because of tidal flows, in 7 hours 20 minutes. In 2007 the achievement was marked by a plaque on the seafront in Penarth.
In 1929 Edith Parnell, a 16 - year - old schoolgirl, emulated Kathleen Thomas 's swim from Penarth to Weston - super-Mare in 10 hours 15 minutes. Edith later became the first wife of Hugh Cudlipp the Welsh journalist and newspaper editor.
The first person to swim the 30.5 nautical miles (56.5 km; 35.1 mi) from Ilfracombe to Swansea was Gethin Jones, who achieved the record on 13 September 2009, taking nearly 22 hours.
The youngest person to swim the Bristol Channel from Penarth to Clevedon is Gary Carpenter, who, at age 17 on the August bank holiday in 2007, swam the channel in 5 hours 35 minutes, making him both the youngest and the fastest swimmer of the Bristol Channel. His coach, Steve Price, was the first person to swim from Penarth to Clevedon, in 1990.
Coordinates: 51 ° 18 ′ N 3 ° 37 ′ W / 51.300 ° N 3.617 ° W / 51.300; - 3.617
|
when was the international covenant on civil and political rights signed | International Covenant on Civil and Political Rights - Wikipedia
The International Covenant on Civil and Political Rights (ICCPR) is a multilateral treaty adopted by the United Nations General Assembly with resolution 2200A (XXI) on 19 December 1966, and in force from 23 March 1976 in accordance with Article 49 of the covenant. Article 49 provided that the covenant would enter into force three months after the date of the deposit of the thirty - fifth instrument of ratification or accession. The covenant commits its parties to respect the civil and political rights of individuals, including the right to life, freedom of religion, freedom of speech, freedom of assembly, electoral rights and rights to due process and a fair trial. As of February 2017, the Covenant has 170 parties and six more signatories without ratification.
The ICCPR is part of the International Bill of Human Rights, along with the International Covenant on Economic, Social and Cultural Rights (ICESCR) and the Universal Declaration of Human Rights (UDHR).
The ICCPR is monitored by the United Nations Human Rights Committee (a separate body to the United Nations Human Rights Council), which reviews regular reports of States parties on how the rights are being implemented. States must report initially one year after acceding to the Covenant and then whenever the Committee requests (usually every four years). The Committee normally meets in Geneva and normally holds three sessions per year.
The ICCPR has its roots in the same process that led to the Universal Declaration of Human Rights. A "Declaration on the Essential Rights of Man '' had been proposed at the 1945 San Francisco Conference which led to the founding of the United Nations, and the Economic and Social Council was given the task of drafting it. Early on in the process, the document was split into a declaration setting forth general principles of human rights, and a convention or covenant containing binding commitments. The former evolved into the UDHR and was adopted on 10 December 1948.
The States Parties to the present Covenant, including those having responsibility for the administration of Non-Self - Governing and Trust Territories, shall promote the realization of the right of self - determination, and shall respect that right, in conformity with the provisions of the Charter of the United Nations.
Drafting continued on the convention, but there remained significant differences between UN members on the relative importance of negative Civil and Political versus positive Economic, Social and Cultural rights. These eventually caused the convention to be split into two separate covenants, "one to contain civil and political rights and the other to contain economic, social and cultural rights. '' The two covenants were to contain as many similar provisions as possible, and be opened for signature simultaneously. Each would also contain an article on the right of all peoples to self - determination.
The first document became the International Covenant on Civil and Political Rights and the second the International Covenant on Economic, Social and Cultural Rights. The drafts were presented to the UN General Assembly for discussion in 1954, and adopted in 1966. As a result of diplomatic negotiations the International Covenant on Economic, Social and Cultural Rights was adopted shortly before the International Covenant on Civil and Political Rights. Together, the UDHR and the two Covenants are considered to be the foundational human rights texts in the contemporary international system of human rights.
The Covenant follows the structure of the UDHR and ICESCR, with a preamble and fifty-three articles, divided into six parts.
Part 1 (Article 1) recognizes the right of all peoples to self-determination, including the right to "freely determine their political status", pursue their economic, social and cultural goals, and manage and dispose of their own resources. It recognises a negative right of a people not to be deprived of its means of subsistence, and imposes an obligation on those parties still responsible for non-self-governing and trust territories (colonies) to encourage and respect their self-determination.
Part 2 (Articles 2 -- 5) obliges parties to legislate where necessary to give effect to the rights recognised in the Covenant, and to provide an effective legal remedy for any violation of those rights. It also requires the rights be recognised "without distinction of any kind, such as race, colour, sex, language, religion, political or other opinion, national or social origin, property, birth or other status", and to ensure that they are enjoyed equally by women. The rights can only be limited "in time of public emergency which threatens the life of the nation", and even then no derogation is permitted from the rights to life, freedom from torture and slavery, the freedom from retrospective law, the right to personhood, and freedom of thought, conscience and religion.
Part 3 (Articles 6 -- 27) lists the rights themselves. These include rights to physical integrity (life, and freedom from torture and slavery); liberty and security of the person; procedural fairness and a fair trial; freedom of movement, thought, conscience, religion, expression, association and assembly; political participation; and non-discrimination and minority rights.
Many of these rights include specific actions which must be undertaken to realise them.
Part 4 (Articles 28 -- 45) governs the establishment and operation of the Human Rights Committee and the reporting and monitoring of the Covenant. It also allows parties to recognise the competence of the Committee to resolve disputes between parties on the implementation of the Covenant (Articles 41 and 42).
Part 5 (Articles 46 -- 47) clarifies that the Covenant shall not be interpreted as interfering with the operation of the United Nations or "the inherent right of all peoples to enjoy and utilize fully and freely their natural wealth and resources".
Part 6 (Articles 48 -- 53) governs ratification, entry into force, and amendment of the Covenant.
Article 6 of the Covenant recognises the individual's "inherent right to life" and requires it to be protected by law. It is a "supreme right" from which no derogation can be permitted, and must be interpreted widely. It therefore requires parties to take positive measures to reduce infant mortality and increase life expectancy, as well as forbidding arbitrary killings by security forces.
While Article 6 does not prohibit the death penalty, it restricts its application to the "most serious crimes" and forbids it to be used on children and pregnant women or in a manner contrary to the Convention on the Prevention and Punishment of the Crime of Genocide. The UN Human Rights Committee interprets the Article as "strongly suggest(ing) that abolition is desirable", and regards any progress towards abolition of the death penalty as advancing this right. The Second Optional Protocol commits its signatories to the abolition of the death penalty within their borders.
Article 7 prohibits torture and cruel, inhuman or degrading punishment. As with Article 6, it cannot be derogated from under any circumstances. The article is now interpreted to impose similar obligations to those required by the United Nations Convention Against Torture, including not just prohibition of torture, but active measures to prevent its use and a prohibition on refoulement. In response to Nazi human experimentation during the Second World War, the article explicitly includes a prohibition on medical and scientific experimentation without consent.
Article 8 prohibits slavery and enforced servitude in all situations. The article also prohibits forced labour, with exceptions for criminal punishment, military service and civil obligations.
Article 9 recognises the rights to liberty and security of the person. It prohibits arbitrary arrest and detention, requires any deprivation of liberty to be according to law, and obliges parties to allow those deprived of their liberty to challenge their imprisonment through the courts. These provisions apply not just to those imprisoned as part of the criminal process, but also to those detained due to mental illness, drug addiction, or for educational or immigration purposes.
Articles 9.2 to 9.4 impose procedural safeguards around arrest, requiring anyone arrested to be promptly informed of the charges against them, and to be brought promptly before a judge. They also restrict the use of pre-trial detention, requiring that it not be "the general rule".
Article 10 requires anyone deprived of liberty to be treated with dignity and humanity. This applies not just to prisoners, but also to those detained for immigration purposes or psychiatric care. The right complements the Article 7 prohibition on torture and cruel, inhuman or degrading treatment. The article also imposes specific obligations around criminal justice, requiring prisoners in pretrial detention to be separated from convicted prisoners, and children to be separated from adults. It requires prisons to be focused on reform and rehabilitation rather than punishment.
Article 11 prohibits the use of imprisonment as a punishment for breach of contract.
Article 14 recognizes and protects a right to justice and a fair trial. Article 14.1 establishes the ground rules: everyone must be equal before the courts, and any hearing must take place in open court before a competent, independent and impartial tribunal, with any judgment or ruling made public. Closed hearings are only permitted for reasons of privacy, justice, or national security, and judgments may only be suppressed in divorce cases or to protect the interests of children. These obligations apply to both criminal and civil hearings, and to all courts and tribunals.
The rest of the article imposes specific and detailed obligations around the process of criminal trials in order to protect the rights of the accused and the right to a fair trial. It establishes the presumption of innocence and forbids double jeopardy. It requires that those convicted of a crime be allowed to appeal to a higher tribunal, and requires victims of a miscarriage of justice to be compensated. It establishes rights to a speedy trial, to counsel, against self-incrimination, and for the accused to be present and to call and examine witnesses.
Article 15 prohibits prosecutions under ex post facto law and the imposition of retrospective criminal penalties, and requires the imposition of the lesser penalty where criminal sentences have changed between the offence and conviction. An exception is made for acts that were criminal according to the general principles of law recognized by the community of nations (jus cogens) at the time they were committed.
Article 16 requires states to recognize everyone as a person before the law.
Article 12 guarantees freedom of movement, including the right of persons to choose their residence, to leave and return to a country. These rights apply to legal aliens as well as citizens of a state, and can be restricted only where necessary to protect national security, public order or health, and the rights and freedoms of others. The article also recognises a right of people to enter their own country: the right of return. The Human Rights Committee interprets this right broadly as applying not just to citizens, but also to those stripped of or denied their nationality. They also regard it as near-absolute; "there are few, if any, circumstances in which deprivation of the right to enter one's own country could be reasonable".
Article 13 forbids the arbitrary expulsion of resident aliens and requires such decisions to be able to be appealed and reviewed.
Article 17 mandates the right of privacy. This provision, specifically Article 17(1), protects private adult consensual sexual activity, thereby nullifying prohibitions on homosexual behaviour; however, the wording of the Covenant's marriage right (Article 23) excludes the extrapolation of a same-sex marriage right from this provision. Article 17 also protects people against unlawful attacks on their honor and reputation, and Article 17(2) grants the protection of the law against such attacks.
Article 18 mandates freedom of religion or belief.
Article 19 mandates freedom of expression.
Article 20 mandates sanctions against inciting hatred.
Articles 21 and 22 mandate the freedoms of assembly and association. These provisions guarantee the right of peaceful assembly and the right to form and join trade unions, and preserve the guarantees of the International Labour Organisation's 1948 Convention concerning Freedom of Association and Protection of the Right to Organise.
Article 23 mandates the right of marriage. The wording of this provision neither requires nor prohibits same-sex marriage.
Article 24 mandates special protection, the right to a name, and the right to a nationality for every child.
Article 27 mandates the rights of ethnic, religious and linguistic minorities to enjoy their own culture, to profess their own religion, and to use their own language.
Article 3 provides an accessory non-discrimination principle: accessory in the sense that it cannot be invoked independently, but only in relation to another right protected by the ICCPR.
In contrast, Article 26 contains a revolutionary norm by providing an autonomous equality principle which is not dependent upon another right under the Covenant being infringed. This has the effect of widening the scope of the non-discrimination principle beyond the rights otherwise enumerated in the ICCPR.
There are two Optional Protocols to the Covenant. The First Optional Protocol establishes an individual complaints mechanism, allowing individuals to complain to the Human Rights Committee about violations of the Covenant. This has led to the creation of a complex jurisprudence on the interpretation and implementation of the Covenant. As of July 2013, the First Optional Protocol has 116 parties.
The Second Optional Protocol abolishes the death penalty; however, countries were permitted to make a reservation allowing for use of death penalty for the most serious crimes of a military nature, committed during wartime. As of December 2017, the Second Optional Protocol had 85 parties.
A number of parties have made reservations and interpretative declarations to their application of the Covenant.
Argentina will apply the fair trial rights guaranteed in its constitution to the prosecution of those accused of violating the general law of nations.
Australia reserves the right to progressively implement the prison standards of Article 10, to compensate for miscarriages of justice by administrative means rather than through the courts, and interprets the prohibition on racial incitement as being subject to the freedoms of expression, association and assembly. It also declares that its implementation will be effected at each level of its federal system.
Austria reserves the right to continue to exile members of the House of Habsburg, and limits the rights of the accused and the right to a fair trial to those already existing in its legal system.
Bahamas, due to problems with implementation, reserves the right not to compensate for miscarriages of justice.
Bahrain interprets Articles 3 (no sexual discrimination), 18 (freedom of religion) and 23 (family rights) within the context of Islamic Sharia law.
Bangladesh reserves the right to try people in absentia where they are fugitives from justice and declares that resource constraints mean that it can not necessarily segregate prisons or provide counsel for accused persons.
Barbados reserves the right not to provide free counsel for accused persons due to resource constraints.
Belgium interprets the freedoms of speech, assembly and association in a manner consistent with the European Convention on Human Rights. It does not consider itself obliged to ban war propaganda as required by Article 20, and interprets that article in light of the freedom of expression in the UDHR.
Belize reserves the right not to compensate for miscarriages of justice, due to problems with implementation, and does not plan to provide free legal counsel for the same reasons as above. It also refuses to ensure the right to free travel at any time, due to a law requiring those travelling abroad to provide tax clearance certificates.
Congo, as per the Congolese Code of Civil, Commercial, Administrative and Financial Procedure, in matters of private law, decisions or orders emanating from conciliation proceedings may be enforced through imprisonment for debt.
Denmark reserves the right to exclude the press and the public from trials as per its own laws. Reservation is further made to Article 20, paragraph 1. This reservation is in accordance with the vote cast by Denmark in the XVI General Assembly of the United Nations in 1961 when the Danish Delegation, referring to the preceding article concerning freedom of expression, voted against the prohibition against propaganda for war.
The Gambia, as per its constitution, will provide free legal assistance for accused persons charged with capital offences only.
Pakistan has made several reservations to the articles in the Covenant: "the provisions of Articles 3, 6, 7, 18 and 19 shall be so applied to the extent that they are not repugnant to the Provisions of the Constitution of Pakistan and the Sharia laws", "the provisions of Article 12 shall be so applied as to be in conformity with the Provisions of the Constitution of Pakistan", "With respect to Article 13, the Government of the Islamic Republic of Pakistan reserves its right to apply its law relating to foreigners", "the provisions of Article 25 shall be so applied to the extent that they are not repugnant to the Provisions of the Constitution of Pakistan", and the Government of the Islamic Republic of Pakistan "does not recognize the competence of the Committee provided for in Article 40 of the Covenant".
The United States has made reservations that none of the articles should restrict the right of free speech and association; that the US government may impose capital punishment on any person other than a pregnant woman, including persons below the age of 18; that "cruel, inhuman and degrading treatment or punishment" refers to those treatments or punishments prohibited by one or more of the Fifth, Eighth, and Fourteenth Amendments to the US Constitution; that Paragraph 1, Article 15 will not apply; and that, notwithstanding paragraphs 2(b) and 3 of Article 10 and paragraph 4 of Article 14, the US government may treat juveniles as adults, and accept volunteers to the military prior to the age of 18. The United States also submitted five "understandings" and four "declarations".
The International Covenant on Civil and Political Rights has 167 states parties, 67 by signature and ratification, and the remainder by accession or succession. Another five states have signed but have yet to ratify the treaty.
The Covenant is not directly enforceable in Australia, but its provisions support a number of domestic laws which confer enforceable rights on individuals. For example, Article 17 of the Covenant has been implemented by the Australian Privacy Act 1988. Likewise, the Covenant's equality and anti-discrimination provisions support the federal Disability Discrimination Act 1992. Finally, the Covenant is one of the major sources of 'human rights' listed in the Human Rights (Parliamentary Scrutiny) Act 2011. This law requires most new legislation and administrative instruments (such as delegated/subordinate legislation) to be tabled in parliament with a statement outlining the proposed law's compatibility with the listed human rights. A Joint Committee on Human Rights scrutinises all new legislation and statements of compatibility. The findings of the Joint Committee are not legally binding.
Legislation also establishes the Australian Human Rights Commission (AHRC), which may examine enacted legislation (to suggest remedial enactments) and its administration (to suggest the avoidance of offending practices), and monitor general compliance with the Covenant, which is scheduled to the AHRC legislation.
In Victoria and the Australian Capital Territory, the Covenant can be used by a plaintiff or defendant who invokes those jurisdictions' human rights charters. While the Covenant cannot be used to overturn a Victorian or ACT law, a court can issue a 'declaration of incompatibility' that requires the relevant Attorney-General to respond in Parliament within a set time period. Courts in Victoria and the ACT are also directed by the legislation to interpret the law in a way that gives effect to a human right, and new legislation and subordinate legislation must be accompanied by a statement of compatibility. Efforts to implement a similar charter at the national level have been frustrated, and Australia's Constitution may prevent conferring the 'declaration' power on federal judges.
Ireland's use of Special Criminal Courts, where juries are replaced by judges and other special procedures apply, has been found not to violate the treaty: "In the Committee's view, trial before courts other than the ordinary courts is not necessarily, per se, a violation of the entitlement to a fair hearing and the facts of the present case do not show that there has been such a violation."
New Zealand took measures to give effect to many of the rights contained within it by passing the New Zealand Bill of Rights Act in 1990, and formally incorporated the status of protected person into law through the passing of the Immigration Act 2009.
The United States Senate ratified the ICCPR in 1992, with five reservations, five understandings, and four declarations. Some have noted that with so many reservations, its implementation has little domestic effect. Included in the Senate's ratification was the declaration that "the provisions of Article 1 through 27 of the Covenant are not self-executing", and a Senate Executive Report stated that the declaration was meant to "clarify that the Covenant will not create a private cause of action in U.S. Courts." However, it has been argued that express declarations do not affect treaty obligations (citing Igartúa-De La Rosa v. United States, 417 F.3d 145, 190-191 (1st Cir. 2005)), and that the ICCPR meets the legal definition of a self-executing treaty, pointing to Fleming v. United States (No. 15-8425) and to the United States' own reports to the UN on the Covenant.
Where a treaty or covenant is not self-executing, and where Congress has not acted to implement the agreement with legislation, no private right of action within the US judicial system is created by ratification. However, the US federal government has maintained that the ICCPR was ratified only after it was determined that all the necessary legislation was in place to give the Covenant domestic effect, which, it is argued, makes the treaty self-executing by definition (see the four reports by the US to the UN regarding the ICCPR). It has also been emphasized that the "self-executing" statement was a declaration, and courts have held that declarations have no effect upon treaty law and the rights of citizens.
As a reservation that is "incompatible with the object and purpose" of a treaty is void as a matter of the Vienna Convention on the Law of Treaties and international law, there is some issue as to whether the non-self-execution declaration is even legal under domestic law.
Prominent critics in the human rights community, such as Prof. Louis Henkin (who considered the non-self-execution declaration incompatible with the Supremacy Clause) and Prof. Jordan Paust ("Rarely has a treaty been so abused"), have denounced the United States' ratification subject to the non-self-execution declaration as a blatant fraud upon the international community, especially in light of its subsequent failure to conform domestic law to the minimum human rights standards established in the Covenant and in the Universal Declaration of Human Rights over the last thirty years.
It has been argued that Article 20(2) of the ICCPR, as well as Article 4 of the ICERD, may be unconstitutional according to Supreme Court precedent, which is the reason behind the Senate reservations.
In 1994, the United Nations ' Human Rights Committee expressed concerns with compliance:
Of particular concern are widely formulated reservations which essentially render ineffective all Covenant rights which would require any change in national law to ensure compliance with Covenant obligations. No real international rights or obligations have thus been accepted. And when there is an absence of provisions to ensure that Covenant rights may be sued on in domestic courts, and, further, a failure to allow individual complaints to be brought to the Committee under the first Optional Protocol, all the essential elements of the Covenant guarantees have been removed.
Indeed, the United States has not accepted a single international obligation required under the Covenant. It has not changed its domestic law to conform with the strictures of the Covenant. Its citizens are not permitted to sue to enforce their basic human rights under the Covenant. It has not ratified the Optional Protocol to the Convention against Torture (OPCAT). As such, the Covenant has been rendered ineffective, with the bone of contention being United States officials' insistence upon preserving a vast web of sovereign, judicial, prosecutorial, and executive branch immunities that often deprives its citizens of the "effective remedy" under law the Covenant is intended to guarantee.
In 2006, the Human Rights Committee expressed concern over what it interprets as material non-compliance, exhorting the United States to take immediate corrective action:
The Committee notes with concern the restrictive interpretation made by the State party of its obligations under the Covenant, as a result in particular of (a) its position that the Covenant does not apply with respect to individuals under its jurisdiction but outside its territory, nor in time of war, despite the contrary opinions and established jurisprudence of the Committee and the International Court of Justice; (b) its failure to take fully into consideration its obligation under the Covenant not only to respect, but also to ensure the rights prescribed by the Covenant; and (c) its restrictive approach to some substantive provisions of the Covenant, which is not in conformity with the interpretation made by the Committee before and after the State party's ratification of the Covenant.
The State party should review its approach and interpret the Covenant in good faith, in accordance with the ordinary meaning to be given to its terms in their context, including subsequent practice, and in the light of its object and purpose. The State party should in particular (a) acknowledge the applicability of the Covenant with respect to individuals under its jurisdiction but outside its territory, as well as its applicability in time of war; (b) take positive steps, when necessary, to ensure the full implementation of all rights prescribed by the Covenant; and (c) consider in good faith the interpretation of the Covenant provided by the Committee pursuant to its mandate.
As of February 2013, the United States is among States scheduled for examination in the 107th (11 -- 28 March 2013) and 109th (14 October -- 1 November 2013) sessions of the Committee.
There are a total of 170 parties to the International Covenant on Civil and Political Rights.
Most states in the world are parties to the ICCPR. The following 27 states have not become party to it; of these, six have signed the Covenant but not ratified it.
where was the upper room located in jerusalem | Cenacle - wikipedia
Coordinates: 31 ° 46 ′ 18 '' N 35 ° 13 ′ 44 '' E / 31.7718 ° N 35.229 ° E / 31.7718; 35.229
The Cenacle (from Latin cēnāculum "dining room", later spelt coenaculum and semantically drifting towards "upper room"), also known as the "Upper Room", is a room in the David's Tomb Compound in Jerusalem, traditionally held to be the site of the Last Supper.
The word is a derivative of the Latin word cēnō, which means "I dine". The Gospel of Mark employs the Ancient Greek: ἀνάγαιον, anagaion (Mark 14:15), whereas the Acts of the Apostles uses Ancient Greek: ὑπερῷον, hyperōion (Acts 1:13), both with the meaning "upper room". The language in Acts suggests that the apostles used the Upper Room as a temporary residence (Ancient Greek: οὗ ἦσαν καταμένοντες, hou ēsan katamenontes), although the Jamieson-Fausset-Brown Bible Commentary disagrees, preferring to see the room as a place where they were "not lodged, but had for their place of rendezvous".
Jerome used the Latin coenaculum for both Greek words in his Latin Vulgate translation. In Christian tradition, the "Upper Room" was not only the site of the Last Supper (i.e. the Cenacle), but the room in which the Holy Spirit alighted upon the eleven apostles after Easter. It is sometimes thought to be the place where the apostles stayed in Jerusalem and, according to the Catholic Encyclopedia, it was "the first Christian church".
The Cenacle is considered the site where many other events described in the New Testament took place, such as the washing of the disciples' feet, some post-resurrection appearances of Jesus, the gathering of the disciples after the Ascension, the election of Saint Matthias as apostle, and the descent of the Holy Spirit at Pentecost.
Pilgrims to Jerusalem have reported visiting a structure on Mount Zion commemorating the Last Supper since the fourth century CE. Some scholars would have it that this was the Cenacle, in fact a synagogue from an earlier time. The anonymous pilgrim from Bordeaux, France, reported seeing such a synagogue in 333 CE. A Christian synagogue is mentioned in the apocryphal fourth-century Anaphora Pilati ("Report of Pilate"). But a Jewish origin for the building has come under serious question, for which see below. The building has experienced numerous cycles of destruction and reconstruction, culminating in the Gothic structure which stands today.
While the term "Cenacle" refers only to the Upper Room, the building contains another site of interest. A niche located on the lower level of the same building is associated by tradition with the burial site of King David, marked by a large cenotaph-sarcophagus first reported seen by 12th-century Crusaders but mentioned earlier in the 10th-century Vita Constantini. Most accept the notice in 1 Kings 2:10 that says David was buried "in the City of David", identified as the eastern hill of ancient Jerusalem, as opposed to what is today called Mount Sion, which is the western hill of the ancient city. The general location of the Cenacle is also associated with that of the house where the Virgin Mary lived among the Apostles until her death or dormition, an event celebrated in the nearby Church of the Dormition.
The early history of the Cenacle site is uncertain; scholars have made attempts at establishing a chronology based on archaeological and artistic evidence as well as historical sources.
Bargil Pixner, for example, following the survey conducted by Jacob Pinkerfeld in 1948, believed that the original building was a synagogue later probably used by Jewish Christians. However, no architectural features associated with early synagogues, such as columns, benches, or other accoutrements, are present in the lower tomb chamber. According to Epiphanius, bishop of Salamis, writing towards the end of the 4th century, the building and its environs were spared during the destruction of Jerusalem under Titus (AD 70). Pixner suggests that the area on Mount Zion was destroyed and that the Cenacle was rebuilt in the later first century. The lowest courses of ashlars (building stones) along the north, east and south walls are attributed by Pinkerfeld to the late Roman period (135-325 CE). Pixner believes rather that they are Herodian-period ashlars, allowing him to date the construction of the building to an earlier period. Many scholars, however, date the walls' earliest construction to the Byzantine period and identify the Cenacle as the remains of a no-longer-extant Hagia Sion ("Holy Zion") basilica. The Roman emperor Theodosius I constructed the five-aisled Hagia Sion basilica likely between 379 and 381 CE. Despite the opinion of those scholars who would characterize the Cenacle as a remainder of Theodosius's basilica, sixth-century artistic representations, such as the mosaics found in Madaba, Jordan (the "Madaba Map") and the Basilica of Santa Maria Maggiore in Rome, depict a smaller structure just to the south of the basilica. Some have identified this smaller structure as the Cenacle, thus demonstrating its independence from, and possible prior existence to, the basilica. The basilica (and perhaps the Cenacle) was later damaged by Persian invaders in 614 AD but restored by the patriarch Modestus. In AD 1009 the church was destroyed by the Muslim caliph Al-Hakim.
Shortly afterward it was replaced by the Crusaders with a cathedral named for Saint Mary featuring a central nave and two side aisles. The Cenacle was either repaired or enclosed by the Crusader church, occupying a portion of two aisles on the right (southern) side of the altar. The Crusader cathedral was destroyed soon afterward, in the late 12th or early 13th century, but the Cenacle remained. (Today, part of the site upon which the Byzantine and Crusader churches stood is believed to be occupied by the smaller Church of the Dormition and its associated Abbey.) Syrian Christians maintained the Cenacle until the 1330s when it passed into the custody of the Franciscan Order of Friars who managed the structure until 1524. At that time Ottoman authorities took possession of the Cenacle converting it into a mosque. The Franciscans were completely evicted from their surrounding buildings in 1550. Architectural evidence remains of the period of Muslim control including the elaborate mihrab in the Last Supper room, the Arabic inscriptions on its walls, the qubba over the stairwell, and the minaret and dome atop the roof. Christians were not officially allowed to return until the establishment of the State of Israel in 1948. The historical building is currently managed by the State of Israel Ministry of the Interior.
Scholars offer wide-ranging dates and builders for the surviving Gothic-style Cenacle. Some believe that it was constructed by Crusaders just before Saladin's conquest of Jerusalem in 1187, while others attribute it to Holy Roman Emperor Frederick II, after he arrived in the city in 1229. Still others hold that it was not built in this form until the Franciscans acquired the site in the 1330s. Scarce documentation and disturbed structural features offer little strong support for any of these dates.
In its current state, the Cenacle is divided into six rib-vaulted bays. The bays are supported by three freestanding columns which bilaterally divide the space, as well as six pillars flanking the side walls. While the capital of the westernmost freestanding column is flush with the Cenacle's interior wall, the column shaft itself is completely independent of the wall, leading scholars to consider the possibility that this wall was not original to the building.
An analysis of the column and pillar capitals offers clues, but not a solution, to the mystery of the current building 's origin. The Corinthianesque capital between the second and third bays of the Cenacle is stylistically indicative of multiple geographical regions and chronological periods. This capital 's spiky leaves, which tightly adhere to the volume of the column before erupting into scrolls, are in congruence with common outputs of the 12th century sculpture workshop at the Temple site in Jerusalem in the last years before Saladin 's conquest in 1187. The workshop also frequently utilized drilling as an ornamental device. The Jerusalem workshop included artists from diverse regions in the West, who brought stylistic traits with them from their native countries. The workshop produced sculpture for many Crusader projects and other structures, such as the al - Aqsa mosque.
This comparison allows for the support of the 12th century date of the Cenacle. There are also, however, similar capitals which originated in workshops in southern Italy, a draw for scholars who wish to associate the building with Holy Roman Emperor Frederick II and the Sixth Crusade in 1229. Examples can be seen in the Romanesque cathedral in Bitonto, a small city near Bari, in southern Italy, and on column supports of the pulpit in the Pisa Baptistery, carved by Apulian - born sculptor Nicola Pisano around 1260.
The capitals of the freestanding columns are not identical. The capital between the first and second bays seems either severely weathered or shallowly carved, and its volume contrasts markedly with the others. It rises from the shaft in a straight cylinder, rather than in an inverted pyramid, and then flares only just before it intersects with the abacus. The third capital, which now flanks the Cenacle 's western wall, is also unique among the three. It is not decorated with a floral motif; rather, scrolling crockets spring from the base of the volume. Enlart has proposed a comparison to buildings constructed by Frederick II in Apulia.
Analysis of these column capitals does not yield significant evidence to link them to the 14th century and a potential Franciscan construction, nor does it definitively date them to the 12th or 13th century. The building remains a frustrating, but intriguing, mystery.
The upper room is a focus or reference in several Christian hymns, for example in "An upper room did our Lord prepare '', written by Fred Pratt Green in 1973, and in "Come, risen Lord, and deign to be our guest '' (' We meet, as in that upper room they met... '), written by George Wallace Briggs.
St. Mark 's Monastery in the Old City of Jerusalem, near the Armenian Quarter, is sometimes considered an alternative location for the Cenacle. The monastery church, belonging to the Syriac Orthodox Church, contains an early Christian stone inscription testifying to reverence for the spot.
|
much of the switching of denominations in the united states and canada is due to | Religion in the United States - Wikipedia
Religion in the United States is characterized by a diversity of religious beliefs and practices. Various religious faiths have flourished within the United States. A majority of Americans report that religion plays a very important role in their lives, a proportion unique among developed countries.
Historically, the United States has always been marked by religious pluralism and diversity, beginning with various native beliefs of the pre-colonial time. In colonial times, Anglicans, Catholics and mainline Protestants, as well as Jews, arrived from Europe. Eastern Orthodoxy has been present since the Russian colonization of Alaska. Various dissenting Protestants, who left the Church of England, greatly diversified the religious landscape. The Great Awakenings gave birth to multiple Evangelical Protestant denominations; membership in Methodist and Baptist churches increased drastically in the Second Great Awakening. In the 18th century, deism found support among American upper classes and thinkers. The Episcopal Church, splitting from the Church of England, came into being in the American Revolution. New Protestant branches like Adventism emerged; Restorationists and other Christians like the Jehovah 's Witnesses, the Latter Day Saint movement, Churches of Christ and Church of Christ, Scientist, as well as Unitarian and Universalist communities all spread in the 19th century. Pentecostalism emerged in the early 20th century as a result of the Azusa Street Revival. Scientology emerged in the 1950s. Unitarian Universalism resulted from the merger of Unitarian and Universalist churches in the 20th century. Since the 1990s, the share of Americans identifying as Christian has decreased due to secularization, while Buddhism, Hinduism, Islam, and other religions have spread. Protestantism, historically dominant, ceased to be the religious category of the majority in the early 2010s.
The majority of U.S. adults self - identify as Christians, while close to a quarter claim no religious affiliation. According to a 2014 study by the Pew Research Center, 70.6 % of the adult population identified themselves as Christians, with 46.5 % professing attendance at a variety of churches that could be considered Protestant, and 20.8 % professing Catholic beliefs. The same study says that other religions (including Judaism, Buddhism, Hinduism, and Islam) collectively make up about 6 % of the population. According to a 2012 survey by the Pew forum, 36 % of U.S. adults state that they attend services nearly every week or more. According to a 2016 Gallup poll, Mississippi (with 63 % of its adult population described as very religious, saying that religion is important to them and attending religious services almost every week) is the most religious state in the country, while New Hampshire (with only 20 % of its adult population described as very religious) is the least religious state.
From early colonial days, when some English and German settlers came in search of religious freedom, America has been profoundly influenced by religion. That influence continues in American culture, social life, and politics. Several of the original Thirteen Colonies were established by settlers who wished to practice their own religion within a community of like - minded people: the Massachusetts Bay Colony was established by English Puritans (Congregationalists), Pennsylvania by British Quakers, Maryland by English Catholics, and Virginia by English Anglicans. Despite these origins, and as a result of intervening religious strife and shifting preferences in England, the Plantation Act 1740 set official policy for new immigrants coming to British America until the American Revolution.
The text of the First Amendment to the country 's Constitution states that "Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances. '' It guarantees the free exercise of religion while also preventing the government from establishing a state religion. However, the states were not bound by the provision, and as late as the 1830s Massachusetts provided tax money to local Congregational churches. Since the 1940s, the Supreme Court has interpreted the Fourteenth Amendment as applying the First Amendment to state and local governments.
President John Adams and a unanimous Senate endorsed the Treaty of Tripoli in 1797 that stated: "the Government of the United States of America is not, in any sense, founded on the Christian religion. ''
From its foundation onward, the United States has been called a Protestant nation by a variety of sources.
According to a 2002 survey by the Pew Research Center, nearly 6 in 10 Americans said that religion plays an important role in their lives, compared to 33 % in Great Britain, 27 % in Italy, 21 % in Germany, 12 % in Japan, and 11 % in France. The survey report stated that the results showed America having a greater similarity to developing nations (where higher percentages say that religion plays an important role) than to other wealthy nations, where religion plays a minor role.
In 1963, 90 % of U.S. adults claimed to be Christian while only 2 % professed no religious identity. In 2014, close to 70 % identify as Christian while close to 23 % claim no religious identity.
The United States federal government was the first national government to have no official state - endorsed religion. However, some states had established religions in some form until the 1830s.
Modeling the provisions concerning religion within the Virginia Statute for Religious Freedom, the framers of the Constitution rejected any religious test for office, and the First Amendment specifically denied the federal government any power to enact any law respecting either an establishment of religion or prohibiting its free exercise, thus protecting any religious organization, institution, or denomination from government interference. The decision was mainly influenced by European Rationalist and Protestant ideals, but was also a consequence of the pragmatic concerns of minority religious groups and small states that did not want to be under the power or influence of a national religion that did not represent them.
The most popular religion in the U.S. is Christianity, comprising the majority of the population (70.6 % of adults in 2014). According to the Association of Statisticians of American Religious Bodies newsletter published March 2017, based on data from 2010, Christians were the largest religious population in all 3,143 counties in the country. Roughly 46.5 % of Americans are Protestants, 20.8 % are Catholics, 1.6 % are Mormons (the name commonly used to refer to members of The Church of Jesus Christ of Latter - day Saints), and 1.7 % have affiliations with various other Christian denominations. Christianity was introduced during the period of European colonization.
According to a 2012 review by the National Council of Churches, the five largest denominations are:
The Southern Baptist Convention, with over 16 million adherents, is the largest of more than 200 distinctly named Protestant denominations. In 2007, members of evangelical churches comprised 26 % of the American population, while another 18 % belonged to mainline Protestant churches, and 7 % belonged to historically black churches.
A 2015 study estimates some 450,000 Christian believers from a Muslim background in the country, most of them belonging to some form of Protestantism. In 2010 there were approximately 180,000 Arab Americans and about 130,000 Iranian Americans who converted from Islam to Christianity. Dudley Woodbury, a Fulbright scholar of Islam, estimates that 20,000 Muslims convert to Christianity annually in the United States.
Historians agree that members of mainline Protestant denominations have played leadership roles in many aspects of American life, including politics, business, science, the arts, and education. They founded most of the country 's leading institutes of higher education. According to Harriet Zuckerman, 72 % of American Nobel Prize laureates between 1901 and 1972 came from a Protestant background.
Episcopalians and Presbyterians tend to be considerably wealthier and better educated than most other religious groups, and many of the wealthiest and most affluent American families, such as the Vanderbilts, Astors, Rockefellers, Du Ponts, Roosevelts, Forbeses, Whitneys, Morgans, and Harrimans, are Mainline Protestant families. Nonetheless, those affiliated with Judaism are the wealthiest religious group in the United States, and those affiliated with Catholicism, owing to sheer size, have the largest number of adherents of all groups in the top income bracket.
Some of the first colleges and universities in America, including Harvard, Yale, Princeton, Columbia, Dartmouth, Williams, Bowdoin, Middlebury, and Amherst, all were founded by mainline Protestant denominations. By the 1920s most had weakened or dropped their formal connection with a denomination. James Hunter argues that:
Beginning around 1600, European settlers introduced Anglican and Puritan religion, as well as Baptist, Presbyterian, Lutheran, Quaker, and Moravian denominations.
Beginning in the 16th century, the Spanish (and later the French and English) introduced Catholicism. From the 19th century to the present, Catholics came to the US in large numbers due to immigration of Italians, Hispanics, Portuguese, French, Polish, Irish, Highland Scots, Dutch, Flemish, Hungarians, Germans, Lebanese (Maronite), and other ethnic groups.
Eastern Orthodoxy was brought to America by Greek, Ukrainian, Armenian, and other immigrant groups.
Several Christian groups were founded in America during the Great Awakenings. Interdenominational evangelicalism and Pentecostalism emerged; new Protestant denominations such as Adventism; non-denominational movements such as the Restoration Movement (which over time separated into the Churches of Christ, the Christian churches and churches of Christ, and the Christian Church (Disciples of Christ)); Jehovah 's Witnesses (called "Bible Students '' in the latter part of the 19th century); and The Church of Jesus Christ of Latter - day Saints (Mormonism).
The strength of various sects varies greatly in different regions of the country. Rural parts of the South have many evangelicals but very few Catholics (except Louisiana, the Gulf Coast, and the Hispanic community, all of which are mainly Catholic), while urbanized areas of the north Atlantic states and Great Lakes, as well as many industrial and mining towns, are heavily Catholic, though still quite mixed, especially due to the heavily Protestant African - American communities. In 1990, nearly 72 % of the population of Utah was Mormon, as well as 26 % of neighboring Idaho. Lutheranism is most prominent in the Upper Midwest, with North Dakota having the highest percentage of Lutherans (35 % according to a 2001 survey).
The largest religion, Christianity, has proportionately diminished since 1990. While the absolute number of Christians rose from 1990 to 2008, the percentage of Christians dropped from 86 % to 76 %. A nationwide telephone interview of 1,002 adults conducted by The Barna Group found that 70 % of American adults believe that God is "the all - powerful, all - knowing creator of the universe who still rules it today '', and that 9 % of all American adults and 0.5 % young adults hold to what the survey defined as a "biblical worldview ''.
Episcopalian, Presbyterian, Eastern Orthodox and United Church of Christ members have the highest number of graduate and post-graduate degrees per capita of all Christian denominations in the United States, as well as the most high - income earners. However, owing to the sheer size or demographic head count of Catholics, more individual Catholics have graduate degrees and are in the highest income brackets than have or are individuals of any other religious community.
After Christianity, Judaism is the next largest religious affiliation in the US, though this identification is not necessarily indicative of religious beliefs or practices. There are between 5.3 and 6.6 million Jews. A significant number of people identify themselves as American Jews on ethnic and cultural grounds, rather than religious ones. For example, 19 % of self - identified American Jews do not believe God exists. The 2001 ARIS study projected from its sample that there are about 5.3 million adults in the American Jewish population: 2.83 million adults (1.4 % of the U.S. adult population) are estimated to be adherents of Judaism; 1.08 million are estimated to be adherents of no religion; and 1.36 million are estimated to be adherents of a religion other than Judaism. ARIS 2008 estimated about 2.68 million adults (1.2 %) in the country identify Judaism as their faith.
Jews have been present in what is now the US since the 17th century, and specifically allowed since the British colonial Plantation Act 1740. Although small Western European communities initially developed and grew, large - scale immigration did not take place until the late 19th century, largely as a result of persecutions in parts of Eastern Europe. The Jewish community in the United States is composed predominantly of Ashkenazi Jews whose ancestors emigrated from Central and Eastern Europe. There are, however, small numbers of older (and some recently arrived) communities of Sephardi Jews with roots tracing back to 15th century Iberia (Spain, Portugal, and North Africa). There are also Mizrahi Jews (from the Middle East, Caucasia and Central Asia), as well as much smaller numbers of Ethiopian Jews, Indian Jews, Kaifeng Jews and others from various smaller Jewish ethnic divisions. Approximately 25 % of the Jewish American population lives in New York City.
According to the Association of Statisticians of American Religious Bodies newsletter published March 2017, based on data from 2010, Jews were the largest minority religion in 231 counties out of the 3143 counties in the country. According to a 2014 survey conducted by the Pew Forum on Religion and Public Life, 1.7 % of adults in the U.S. identify Judaism as their religion. Among those surveyed, 44 % said they were Reform Jews, 22 % said they were Conservative Jews, and 14 % said they were Orthodox Jews. According to the 1990 National Jewish Population Survey, 38 % of Jews were affiliated with the Reform tradition, 35 % were Conservative, 6 % were Orthodox, 1 % were Reconstructionists, 10 % linked themselves to some other tradition, and 10 % said they are "just Jewish ''.
The Pew Research Center report on American Judaism released in October 2013 revealed that 22 % of Jewish Americans say they have "no religion '' and the majority of respondents do not see religion as the primary constituent of Jewish identity. 62 % believe Jewish identity is based primarily in ancestry and culture, only 15 % in religion. Among Jews who gave Judaism as their religion, 55 % based Jewish identity on ancestry and culture, and 66 % did not view belief in God as essential to Judaism.
A 2009 study estimated the Jewish population (including both those who define themselves as Jewish by religion and those who define themselves as Jewish in cultural or ethnic terms) to be between 6.0 and 6.4 million. According to a study done in 2000 there were an estimated 6.14 million Jewish people in the country, about 2 % of the population.
According to the 2001 National Jewish Population Survey, 4.3 million American Jewish adults have some sort of strong connection to the Jewish community, whether religious or cultural. Jewishness is generally considered an ethnic identity as well as a religious one. Among the 4.3 million American Jews described as "strongly connected '' to Judaism, over 80 % have some sort of active engagement with Judaism, ranging from attendance at daily prayer services on one end of the spectrum to attending Passover Seders or lighting Hanukkah candles on the other. The survey also discovered that Jews in the Northeast and Midwest are generally more observant than Jews in the South or West. Reflecting a trend also observed among other religious groups, Jews in the Northwestern United States are typically the least observant of tradition.
The Jewish American community has higher household incomes than average, and is one of the best educated religious communities in the United States.
Islam is the third largest faith in the United States, after Christianity and Judaism, representing 0.9 % of the population. According to the Association of Statisticians of American Religious Bodies newsletter published March 2017, based on data from 2010, Muslims were the largest minority religion in 392 counties out of the 3143 counties in the country. Islam in America effectively began with the arrival of African slaves. It is estimated that about 10 % of African slaves transported to the United States were Muslim. Most, however, became Christians, and the United States did not have a significant Muslim population until the arrival of immigrants from Arab and East Asian Muslim areas. According to some experts, Islam later gained a higher profile through the Nation of Islam, a religious group that appealed to black Americans after the 1940s; its prominent converts included Malcolm X and Muhammad Ali. The first Muslim elected to Congress was Keith Ellison in 2006, followed by André Carson in 2008.
Research indicates that Muslims in the United States are generally more assimilated and prosperous than their counterparts in Europe. Like other subcultural and religious communities, the Islamic community has generated its own political organizations and charity organizations.
The United States has perhaps the second largest Bahá'í community in the world. The first mention of the faith in the U.S. was at the inaugural Parliament of the World 's Religions, which was held at the Columbian Exposition in Chicago in 1893. In 1894, Ibrahim George Kheiralla, a Syrian Bahá'í immigrant, established a community in the U.S. He later left the main group and founded a rival movement. According to the Association of Statisticians of American Religious Bodies newsletter published March 2017, based on data from 2010, Bahá'ís were the largest minority religion in 80 counties out of the 3143 counties in the country.
Rastafarians began migrating to the United States in the 1950s, '60s and '70s from the religion 's 1930s birthplace, Jamaica. Marcus Garvey, who is considered a prophet by many Rastafarians, rose to prominence and cultivated many of his ideas in the United States.
Buddhism entered the US during the 19th century with the arrival of the first immigrants from East Asia. The first Buddhist temple was established in San Francisco in 1853 by Chinese Americans.
During the late 19th century Buddhist missionaries from Japan came to the US. During the same time period, US intellectuals started to take interest in Buddhism.
The first prominent US citizen to publicly convert to Buddhism was Henry Steel Olcott, in 1880; he is still honored in Sri Lanka for these efforts. An event that contributed to the strengthening of Buddhism in the US was the Parliament of the World 's Religions in 1893, which was attended by many Buddhist delegates sent from India, China, Japan, Vietnam, Thailand and Sri Lanka.
The early 20th century was characterized by a continuation of tendencies that had their roots in the 19th century. The second half, by contrast, saw the emergence of new approaches and the move of Buddhism into the mainstream, making it a mass social and religious phenomenon.
Estimates of the number of Buddhists in the United States vary between 0.5 % and 0.9 %, with 0.7 % reported by both the CIA and Pew. According to the Association of Statisticians of American Religious Bodies newsletter published March 2017, based on data from 2010, Buddhists were the largest minority religion in 186 counties out of the 3143 counties in the country.
Hinduism is the fourth largest faith in the United States, representing 0.7 % of the population. The first arrival of Hinduism in the U.S. is not clearly identifiable. However, large groups of Hindus have immigrated from India and other Asian countries since the enactment of the Immigration and Nationality Act of 1965. During the 1960s and 1970s Hinduism exercised a fascination that contributed to the development of New Age thought. During the same decades the International Society for Krishna Consciousness (a Vaishnavite Hindu reform organization) was founded in the US.
In 2001, there were an estimated 766,000 Hindus in the US, about 0.2 % of the total population. According to the Association of Statisticians of American Religious Bodies newsletter published March 2017, based on data from 2010, Hindus were the largest minority religion in 92 counties out of the 3143 counties in the country.
In 2004 the Hindu American Foundation -- a national institution protecting the rights of the Hindu community in the U.S. -- was founded.
American Hindus have one of the highest rates of educational attainment and household income among all religious communities, and tend to have lower divorce rates.
Adherents of Jainism first arrived in the United States in the 20th century. The most significant time of Jain immigration was in the early 1970s. The United States has since become a center of the Jain Diaspora. The Federation of Jain Associations in North America is an umbrella organization of local American and Canadian Jain congregations to preserve, practice, and promote Jainism and the Jain way of life.
Sikhism is a religion originating from South Asia (predominantly in modern - day India) which was introduced into the United States when, around the turn of the 20th century, Sikhs started emigrating to the United States in significant numbers to work on farms in California. They were the first community to come from India to the US in large numbers. The first Sikh Gurdwara in America was built in Stockton, California, in 1912. In 2007, there were estimated to be between 250,000 and 500,000 Sikhs living in the United States, with the largest populations living on the East and West Coasts, with additional populations in Detroit, Chicago, and Austin.
The United States also has a number of non-Punjabi converts to Sikhism.
In 2004 there were an estimated 56,000 Taoists in the US. Taoism was popularized throughout the world through the writings and teachings of Lao Tzu and other Taoists as well as the practice of Qigong, Tai Chi Chuan and other Chinese martial arts.
This group includes atheists, agnostics and people who describe their religion as "nothing in particular ''.
"Unaffiliated '' does not necessarily mean "non-religious ''. Some people who are unaffiliated with any particular religion express religious beliefs (such as belief in one or more gods or in reincarnation) and engage in religious practices (such as prayer).
A 2001 survey directed by Dr. Ariela Keysar for the City University of New York indicated that, amongst the more than 100 categories of response, "no religious identification '' had the greatest increase in population in both absolute and percentage terms. This category included atheists, agnostics, humanists, and others with no stated religious preferences. Figures are up from 14.3 million in 1990 to 34.2 million in 2008, representing an increase from 8 % of the total population in 1990 to 15 % in 2008. A nationwide Pew Research study published in 2008 put the figure of unaffiliated persons at 16.1 %, while another Pew study published in 2012 was described as placing the proportion at about 20 % overall and roughly 33 % for the 18 -- 29 - year - old demographic.
In a 2006 nationwide poll, University of Minnesota researchers found that despite an increasing acceptance of religious diversity, atheists were generally distrusted by other Americans, who trusted them less than Muslims, recent immigrants and other minority groups in "sharing their vision of American society ''. They also associated atheists with undesirable attributes such as amorality, criminal behavior, rampant materialism and cultural elitism. However, the same study also reported that "The researchers also found acceptance or rejection of atheists is related not only to personal religiosity, but also to one 's exposure to diversity, education and political orientation -- with more educated, East and West Coast Americans more accepting of atheists than their Midwestern counterparts. '' Some surveys have indicated that doubts about the existence of the divine were growing quickly among Americans under 30.
On 24 March 2012, American atheists sponsored the Reason Rally in Washington, D.C., followed by the American Atheist Convention in Bethesda, Maryland. Organizers called the estimated crowd of 8,000 -- 10,000 the largest - ever US gathering of atheists in one place.
In the United States, Enlightenment philosophy (which itself was heavily inspired by deist ideals) played a major role in creating the principle of religious freedom, expressed in Thomas Jefferson 's letters and included in the First Amendment to the United States Constitution. American Founding Fathers, or Framers of the Constitution, who were especially noted for being influenced by such philosophy of deism include Thomas Jefferson, Benjamin Franklin, Cornelius Harnett, Gouverneur Morris, and Hugh Williamson. Their political speeches show distinct deistic influence. Other notable Founding Fathers may have been more directly deist. These include Thomas Paine, James Madison, possibly Alexander Hamilton, and Ethan Allen.
Various polls have been conducted to determine Americans ' actual beliefs regarding a god:
"Spiritual but not religious '' (SBNR) is a self - identified stance of spirituality that takes issue with organized religion as the sole or most valuable means of furthering spiritual growth. Spirituality places an emphasis upon the wellbeing of the "mind - body - spirit, '' so holistic activities such as tai chi, reiki, and yoga are common within the SBNR movement. In contrast to religion, spirituality has often been associated with the interior life of the individual.
One fifth of the US public and a third of adults under the age of 30 are reportedly unaffiliated with any religion; however, they identify as being spiritual in some way. Of these religiously unaffiliated Americans, 37 % classify themselves as spiritual but not religious.
Many other religions are represented in the United States, including Shinto, Caodaism, Thelema, Santería, Kemetism, Religio Romana, Kaldanism, Zoroastrianism, Vodou, Pastafarianism, and many forms of New Age spirituality.
Native American religions historically exhibited much diversity, and are often characterized by animism or panentheism. The membership of Native American religions in the 21st century comprises about 9,000 people.
Neopaganism in the United States is represented by widely different movements and organizations. The largest Neopagan religion is Wicca, followed by Neo-Druidism. Other neopagan movements include Germanic Neopaganism, Celtic Reconstructionist Paganism, Hellenic Polytheistic Reconstructionism, and Semitic neopaganism.
According to the American Religious Identification Survey (ARIS), there are approximately 30,000 druids in the United States. Modern Druidism came to North America first in the form of fraternal Druidic organizations in the nineteenth century, and orders such as the Ancient Order of Druids in America were founded as distinct American groups as early as 1912. In 1963, the Reformed Druids of North America (RDNA) was founded by students at Carleton College, Northfield, Minnesota. They adopted elements of Neopaganism into their practices, for instance celebrating the festivals of the Wheel of the Year.
Wicca was advanced in North America in the 1960s by Raymond Buckland, an expatriate Briton who visited Gardner 's Isle of Man coven to gain initiation. Universal Eclectic Wicca was popularized in 1969 for a diverse membership drawing from both Dianic and British Traditional Wiccan backgrounds.
A group of churches which started in the 1830s in the United States is known under the banner of "New Thought ''. These churches share a spiritual, metaphysical and mystical predisposition and understanding of the Bible and were strongly influenced by the Transcendentalist movement, particularly the work of Ralph Waldo Emerson. Another antecedent of this movement was Swedenborgianism, founded on the writings of Emanuel Swedenborg in 1787. The New Thought concept was named by Emma Curtis Hopkins ("teacher of teachers '') after Hopkins broke off from Mary Baker Eddy 's Church of Christ, Scientist. The movement had been previously known as the Mental Sciences or the Christian Sciences. The three major branches are Religious Science, Unity Church and Divine Science.
Unitarian Universalists (UU 's) are among the most liberal of all religious denominations in America. The shared creed includes beliefs in inherent dignity, a common search for truth, respect for the beliefs of others, compassion, and social action. They are unified by their shared search for spiritual growth and by the understanding that an individual 's theology is a result of that search and not obedience to an authoritarian requirement. UU 's have historical ties to anti-war, civil rights, and LGBT rights movements, as well as providing inclusive church services for the broad spectrum of liberal Christians, liberal Jews, secular humanists, LGBT, Jewish - Christian parents and partners, Earth - centered / Wicca, and Buddhist meditation adherents.
The First Amendment guarantees both the free practice of religion and the non-establishment of religion by the federal government (later court decisions have extended that prohibition to the states). The U.S. Pledge of Allegiance was modified in 1954 to add the phrase "under God '', in order to distinguish itself from the state atheism espoused by the Soviet Union.
Various American presidents have often stated the importance of religion. On February 20, 1955, President Dwight D. Eisenhower stated that "Recognition of the Supreme Being is the first, the most basic, expression of Americanism. '' President Gerald Ford agreed with and repeated this statement in 1974.
The U.S. Census does not ask about religion. Various groups have conducted surveys to determine approximate percentages of those affiliated with each religious group.
Religion in the United States according to Gallup, Inc. (2016)
Religion in the United States according to the Pew Research Center (2014)
A 2013 survey reported that 31 % of Americans attend religious services at least weekly. It was conducted by the Public Religion Research Institute with a margin of error of 2.5.
In 2006, an online Harris Poll (which stated that the magnitude of errors cannot be estimated due to sampling errors, non-response, etc.; 2,010 U.S. adults were surveyed) found that 26% of those surveyed attended religious services "every week or more often", 9% went "once or twice a month", 21% went "a few times a year", 3% went "once a year", 22% went "less than once a year", and 18% never attend religious services.
In a 2009 Gallup International survey, 41.6 % of American citizens said that they attended a church, synagogue, or mosque once a week or almost every week. This percentage is higher than other surveyed Western countries. Church attendance varies considerably by state and region. The figures, updated to 2014, ranged from 51 % in Utah to 17 % in Vermont.
In August 2010, 67 % of Americans said religion was losing influence, compared with 59 % who said this in 2006. Majorities of white evangelical Protestants (79 %), white mainline Protestants (67 %), black Protestants (56 %), Catholics (71 %), and the religiously unaffiliated (62 %) all agreed that religion was losing influence on American life; 53 % of the total public said this was a bad thing, while just 10 % see it as a good thing.
Politicians frequently discuss their religion when campaigning, and fundamentalists and black Protestants are highly politically active. However, to keep their status as tax-exempt organizations, religious bodies must not officially endorse a candidate. Historically, Catholics were heavily Democratic before the 1970s, while mainline Protestants comprised the core of the Republican Party. Those patterns have faded: Catholics, for example, now split about 50-50. White evangelicals, however, have since 1980 made up a solidly Republican group that favors conservative candidates, while secular voters are increasingly Democratic.
Only three presidential candidates for major parties have been Catholics, all for the Democratic Party:
Joe Biden is the first Catholic vice president.
Joe Lieberman was the first Jewish candidate on a major party presidential ticket, as part of the Gore–Lieberman campaign of 2000 (although John Kerry and Barry Goldwater both had Jewish ancestry, they were practicing Christians). Bernie Sanders, who ran against Hillary Clinton in the Democratic primary of 2016, was the first major Jewish candidate to compete in the presidential primary process; however, Sanders noted during the campaign that he does not actively practice any religion.
In 2006 Keith Ellison of Minnesota became the first Muslim elected to Congress; when re-enacting his swearing - in for photos, he used the copy of the Qur'an once owned by Thomas Jefferson. André Carson is the second Muslim to serve in Congress.
A Gallup poll released in 2007 indicated that 53% of Americans would refuse to vote for an atheist as president, up from 48% in 1987 and 1999. The number then began to drop again, reaching a record low of 43% in 2012 and 40% in 2015.
Mitt Romney, the Republican presidential nominee in 2012, is a Mormon and a member of The Church of Jesus Christ of Latter-day Saints. He is a former governor of Massachusetts, and his father George Romney was governor of Michigan. The Romneys were active in the Mormon church in their home states and in Utah.
On January 3, 2013, Tulsi Gabbard became the first Hindu member of Congress, using a copy of the Bhagavad Gita while swearing - in.
The table below is based mainly on data reported by individual denominations to the Yearbook of American and Canadian Churches, and published in 2011 by the National Council of Churches of Christ in USA. It only includes religious bodies reporting 60,000 or more members. The definition of a member is determined by each religious body.
The Association of Religion Data Archives (ARDA) surveyed congregations for their memberships. Churches were asked for their membership numbers. Adjustments were made for congregations that did not respond and for religious groups that reported only adult membership. ARDA estimates that most of the churches not responding were black Protestant congregations. Significant differences from other databases include lower representation of adherents of all kinds (62.7%), of Christians (59.9%), and of Protestants (less than 36%), and a greater number of unaffiliated (37.3%).
The United States government does not collect religious data in its census. The survey below, the American Religious Identification Survey (ARIS) of 2008, was a random digit - dialed telephone survey of 54,461 American residential households in the contiguous United States. The 1990 sample size was 113,723; 2001 sample size was 50,281.
Adult respondents were asked the open - ended question, "What is your religion, if any? '' Interviewers did not prompt or offer a suggested list of potential answers. The religion of the spouse or partner was also asked. If the initial answer was "Protestant '' or "Christian '' further questions were asked to probe which particular denomination. About one third of the sample was asked more detailed demographic questions.
Religious Self - Identification of the U.S. Adult Population: 1990, 2001, 2008 Figures are not adjusted for refusals to reply; investigators suspect refusals are possibly more representative of "no religion '' than any other group.
Highlights:
The table below shows religious affiliation among ethnicities in the United States, according to the Pew Forum 2014 survey. People of Black ethnicity were most likely to be part of a formal religion, with 85% being Christians. Protestant denominations make up the majority of Christians in each ethnicity.
who was eliminated on ink master last week | Ink Master (Season 10) - wikipedia
Ink Master: Return of the Masters is the tenth season of the tattoo reality competition Ink Master, which premiered on January 9, 2018 at 10/9c. The first two episodes of the season were the series' last to air on Spike before the network's transition to the Paramount Network nine days later, on January 18; the remaining episodes air on the new channel. Host Dave Navarro returned alongside co-judges Oliver Peck and Chris Nunez.
Season 10 uses the same format as Season 8, but this time the teams are led by three Ink Master winners: Steve Tefft from Season 2, Anthony Michaels from Season 7, and Old Town Ink's DJ Tambe from Season 9. Twenty-four artists compete to be among the 18 who are then narrowed down, through the first day of competition, to three drafted teams of six. Several contestants had already secured a spot for the first day of competition following their respective angel face-off wins in the spin-off Ink Master: Angels: Deanna Smith, Sparks, Daniel Silva, Rachel Helmich and Tim Furlow. Neither Helmich nor Furlow, however, was chosen for one of the teams. The live finale will feature the captains going head-to-head in the first ever Master Face-Off. The winner of Season 10 will receive $100,000, a feature in Inked magazine and the title of "Ink Master."
The judge listing types are:
24 artists arrive to Coney Island where they had six hours to impress the masters by tattooing a design in the style and subject of their choice. The judges and the masters then judged each tattoo in a blind critique without knowing who did what.
With nine artists remaining, the contestants compete in a marathon challenge, tattooing a different creature of land, sea, or sky in each round.
Note: This episode was released online ahead of its respective airdate.
who is doing the half time show superbowl 2018 | Super Bowl LII halftime show - Wikipedia
The Super Bowl LII Halftime Show (officially known as the Pepsi Super Bowl LII Halftime Show) took place on February 4, 2018 at U.S. Bank Stadium in Minneapolis, Minnesota, as part of Super Bowl LII. Justin Timberlake was the featured performer, as confirmed by the National Football League (NFL) on October 22, 2017. It was televised nationally by NBC.
The show began with Jimmy Fallon introducing Justin Timberlake, followed by a video screen depicting Timberlake performing "Filthy '' in a club setting below the field level of the stadium. He then walked up a staircase and appeared on a ramp stage extending outward into the field, descending into a series of stages surrounded by a crowd. Timberlake proceeded to move through the crowd performing "Rock Your Body '' with a troupe of female backup dancers, abruptly stopping short of the end of the song and shifting to "Señorita '' on a small stage with his backing dancers. Upon reaching the main stage, he performed a number of songs, including "SexyBack '', "My Love '', and "Cry Me a River '', which featured a dance break mid-field. Upon reaching the next stage, Timberlake performed his hit song "Suit & Tie '' as the University of Minnesota Marching Band, wearing black tuxedos, played backup instrumentals and marched out to meet him.
Timberlake proceeded to walk up to a white grand piano while performing "Until the End of Time", then segued into "I Would Die 4 U" as a tribute to Minneapolis native Prince. A video of Prince performing the song played in the background, projected onto a large multi-story sheet. An aerial shot showed downtown Minneapolis covered in purple lighting that morphed into Prince's trademark Love Symbol, with the stadium at the center. He then returned to the main stage to perform "Mirrors", as hundreds of dancers and members of the marching band performed choreography with large mirrors, creating bright reflections in the broadcast and across the roof of the stadium. Timberlake closed the show with "Can't Stop the Feeling!", entering the stands at the conclusion of the song.
For the first time since the Super Bowl XLVI halftime show in Indianapolis in 2012, no pyrotechnics were used throughout the performance. The show relied mostly on lasers and video screens for visual effects.
In July 2017, multiple media outlets reported that Britney Spears was in talks to perform at halftime, but Pepsi quickly denied it a few days later. During August and September 2017, several publications reported that Timberlake was the frontrunner to perform at the Super Bowl LII halftime show, first along with his frequent collaborator Jay-Z as co-headliner, and then as the solo performer. A spokesperson for the NFL stated at the time: "along with Pepsi, we know that we will put on a spectacular show. When it is time to announce her name we will do it. Or his name. Or their names." The NFL confirmed the announcement on October 22 with a video starring Timberlake and Jimmy Fallon.
This was Timberlake's third appearance in a Super Bowl halftime show. As a member of NSYNC, he appeared in the Super Bowl XXXV (2001) halftime show, and he was a guest artist in the Super Bowl XXXVIII (2004) halftime show, a performance that featured a controversial incident in which Timberlake accidentally exposed one of Janet Jackson's breasts on national television, described as a "wardrobe malfunction". The Parents Television Council penned an open letter to Timberlake asking him to keep the performance "family-friendly." While the organization acknowledged that Timberlake had apologized for the 2004 incident, it asked him to stay true to his word, saying "we are heartened by your response that the events of 2004 are not going to happen in 2018," as the singer had stated in a prior interview that "we are not going to do that again."
In an interview with Billboard, Pepsi executives expressed:
We are all big fans of Justin Timberlake. We've kind of felt that Justin deserves, and has for a number of years, to be the main artist for the halftime show because previously he wasn't the main artist. It was just about the timing. To be honest, we have discussed Justin for the last number of years for coming and doing halftime, and this year just felt really right to us. He is hands down one of the greatest entertainers currently alive, it was a no-brainer. We know he's gonna bring it.
The Halftime Show included a remembrance for Indianapolis Colts linebacker Edwin Jackson, who died just hours before Super Bowl LII after being struck by a vehicle.
During the performance, Timberlake wore an outfit designed by Stella McCartney, which consists of "alter nappa fringed jacket with a shirt, featuring a landscape artwork by British artist Martin Ridley, '' according to a press release. Also part of the look is a Prince of Wales - check and camouflage splatter - print suit and matching jacket. As usual for McCartney, these pieces were made from animal - free leather and organic cotton.
Timberlake stated in a press conference that there would be no guest musicians in the halftime show and that the event would focus solely on himself and his backing band, the Tennessee Kids. Regarding the Prince tribute, the performance 's creative visual lead, Fireplay 's Nick Whitehouse, told Rolling Stone:
Paying tribute to Prince was something JT highlighted as an important moment for this show, and we spent quite a bit of time ensuring this moment would be true to his legacy. Ultimately, Justin decided that the only person who could do Prince justice is Prince. The band held 50 hours' worth of rehearsals in preparation for the show.
Prince had previously stated he did not want to be included in new music after death in a 1998 interview, citing The Beatles ' "Free as a Bird '' as an example of a practice he considered to be "demonic. '' His family granted permission to use Prince 's likeness on the condition that it not be used in a hologram, and they approved of the final result. Sheila E, a former bandmate of Prince 's who was involved in negotiations over the use of his likeness, stated that "a bigger company '' (she declined to specify whether it was Pepsi or the NFL) had insisted on including the Prince apparition and that the notion was not originally Timberlake 's idea.
Despite the lack of an individual guest artist, the more than 300-member University of Minnesota Marching Band was featured in the show. The band's drumline, brass, and saxophone sections pre-recorded and performed with Timberlake during "Suit & Tie". The upper woodwind and auxiliary sections led drill formations and held large mirrors during Timberlake's performance of "Mirrors", and acted as fans and dancers throughout other portions of the show, including the club scene at the show's opening. All members of the band were featured on the field in the show's finale, "Can't Stop the Feeling!". The band had previously performed in the halftime show of Super Bowl XXVI.
Timberlake's performance received mixed reviews. In a positive review, Bruce R. Miller of the Sioux City Journal wrote that "Timberlake is a masterful live performer -- which made Sunday's Super Bowl performance about the only sure bet," adding that the performer "did a lot of infectious dancing and managed to play with the crowd like no other." Although the show did not have a moment that "stuck," he considered the Prince tribute its best. In a similarly positive review, Taylor Weatherby of Billboard said "there is no denying that Timberlake absolutely rocked his first headlining (halftime)," further adding, "Timberlake's halftime show was undeniably mesmerizing. From starting in the concourse to making his way into the crowd (and making #SelfieKid an instant superstar) for the ending." She also considered it "made for a TV experience" rather than for the audience in the stadium, mainly because of sound quality difficulties, but criticized him for including "Rock Your Body" in the set list. From the same magazine, Nina Braca wrote that "his moves were on point," and Andrew Unterberger said two things were "relatively certain" about the performance: "most of America would love it, and most of the Internet would hate it," adding that Timberlake was "in a situation that was both a can't-lose and a can't-win. It would've been virtually impossible for him to please the critics he'd alienated over the last couple years." Unterberger also wrote, "Timberlake's audio was somewhat lacking throughout... but the choreography, live-band energy and song selection were all pretty impeccable".
Chris Willman of Variety stated that "Timberlake turned in a more enjoyably physical performance than just about anybody else who's done the Bowl show... and if it was more a feat of athleticism than aestheticism, you can't say that's entirely inappropriate for the occasion." Willman added that the show "wasn't one for the ages, but was impressive as a show of athleticism".
Jon Caramanica of The New York Times wrote that Timberlake's performance was "heavy on dance spectacle, light on vocal authority". Daniel Fienberg of The Hollywood Reporter called the show "energetic, but also entirely lacking in live excitement," criticized it for largely lacking spontaneity and live vocals, and wrote that Timberlake delivered "one of the most over-planned, least surprising performances imaginable." Darren Franich of Entertainment Weekly graded the performance a "C", calling it "dutiful, and empty" and faulting Timberlake for playing it too safe. Similarly, Frank Guan of Vulture.com wrote, "Technically speaking, Timberlake's set was a testament to precision", but criticized the performance for lacking personality and regarded it as unmemorable.
The Guardian gave Timberlake 3 out of 5 stars, calling his performance forgettable but flashy. In an interview with NPR, Ann Powers said that "the entire performance was shrouded in the sense of Timberlake not being right for this moment -- and the Janet Jackson controversy haunted it." Daniel D'Addario of Time.com gave the performance a negative review, criticizing Timberlake for singing "Cry Me a River" in addition to "Rock Your Body", calling the former song's lyrics about an evil promiscuous woman out of step with the national mood, and saying that the only message of Timberlake's performance was that he loves his back catalog. Deadline felt "you could see the motions more than you felt the music." Chris Richards of The Washington Post regarded Timberlake's performance as "unambiguously underwhelming".
USA Today and Vulture compared Timberlake's performance unfavorably to Prince's own 2007 halftime show. Amanda Petrusich of The New Yorker wrote that Timberlake's decision to omit the end of "Rock Your Body" (which was performed during the controversial 2004 halftime show) felt "less like an apology than yet more spineless deflection"; Andrew Unterberger of Billboard, however, considered the decision to cut the song short a wise one. Timberlake's Stella McCartney-designed outfit received negative reviews, with some critics regarding it as "tacky". The Los Angeles Times also gave a very critical review, stating that Timberlake had nothing to say in his performance and that it lacked soul and meaning. The Digital Journal gave him 2/5 stars and called the show lackluster.
The Super Bowl LII halftime show was seen by 106.6 million television viewers in the United States, 9 % less than Lady Gaga 's in 2017. It had higher average viewership than the game itself, and the decline for the halftime show was roughly in line with that of the game as a whole, which had lost 7 % compared to the previous year.
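The 9% year-over-year decline stated above lets one back out the approximate size of the prior year's halftime audience. A minimal sketch (the 2017 figure is implied by the arithmetic, not stated in the text):

```python
# Back out the implied 2017 halftime audience from the 2018 figure.
# 106.6 million viewers in 2018 was reported as 9% below 2017.
viewers_2018_m = 106.6   # millions, from the text above
decline = 0.09           # 9% drop versus the prior year

implied_2017_m = viewers_2018_m / (1 - decline)
print(f"Implied 2017 halftime audience: {implied_2017_m:.1f} million")
# → roughly 117.1 million
```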
According to initial sales reports from Nielsen Music, sales of the songs Timberlake performed during the halftime show gained 534 % in the United States on February 4, the day of the Super Bowl, compared to Feb. 3, while his streams on Spotify gained 214 %. Janet Jackson likewise gained 150 % on Spotify.
when was the first swiss army knife made | Swiss Army knife - wikipedia
The Swiss Army knife is a pocketknife or multi-tool manufactured by Victorinox AG (and up to 2005 also by Wenger SA). The term "Swiss Army knife '' was coined by American soldiers after World War II due to the difficulty they had in pronouncing "Offiziersmesser '', the German name.
The Swiss Army knife generally has a main spearpoint blade, as well as various tools, such as screwdrivers, a can opener, and many others. These attachments are stowed inside the handle of the knife through a pivot point mechanism. The handle is usually in its stereotypical red color, and features a Victorinox or Wenger "cross" logo or, for Swiss military issue knives, the coat of arms of Switzerland.
Originating in Ibach, Switzerland, the Swiss Army knife was first produced in 1891 after Karl Elsener's company, which later became Victorinox, won the contract to produce the Swiss Army's Modell 1890 knife from the previous German manufacturer. In 1893, the Swiss cutlery company Paul Boéchat & Cie, which later became Wenger, received its first contract from the Swiss military to produce Modell 1890 knives; the two companies split the contract for provision of the knives from 1908 until Victorinox acquired Wenger in 2005. A cultural icon of Switzerland, the design of the knife and its versatility have both led to worldwide recognition.
During the late 1880s, the Swiss Army decided to purchase a new folding pocket knife for their soldiers. This knife was to be suitable for use by the army in opening canned food and disassembling the Swiss service rifle, the Schmidt -- Rubin, which required a screwdriver for assembly.
The Swiss Army knife was not the first multi-use pocket knife. In 1851, in Moby-Dick (chapter 107), Melville references the "Sheffield contrivances, assuming the exterior -- though a little swelled -- of a common pocket knife; but containing, not only blades of various sizes, but also screw-drivers, cork-screws, tweezers, awls, pens, rulers, nail-filers, countersinkers."
In January 1891, the knife received the official designation Modell 1890. The knife had a blade, reamer, can - opener, screwdriver, and grips made out of dark oak wood that some say was later partly replaced with ebony wood. At that time no Swiss company had the necessary production capacity, so the initial order for 15,000 knives was placed with the German knife manufacturer Wester & Co. from Solingen, Germany. These knives were delivered in October 1891.
In 1891, Karl Elsener, then owner of a company that made surgical equipment, set out to manufacture the knives in Switzerland itself. At the end of 1891, Elsener began production of the Modell 1890 knives. Elsener then wanted to make a pocketknife better suited to officers. In 1896, he succeeded in attaching tools on both sides of the handle using a special spring mechanism: the same spring held tools on both sides in place, an innovation at the time that allowed him to fit twice as many features on the knife. On 12 June 1897, this knife, featuring a second smaller cutting blade, a corkscrew, and wood fiber grips, was registered with the patent office as The Officer's and Sports Knife, though it was never part of a military contract.
Karl Elsener used the cross and shield to identify his knives, the symbol still used today on Victorinox - branded versions. When his mother died in 1909, Elsener decided to name his company "Victoria '' in her memory. In 1921 the company started using stainless steel to make the Swiss Army Knife. Stainless steel is also known as "inox '', short for the French term "acier inoxydable ''. "Victoria '' and "inox '' were then combined to create the company name "Victorinox ''. Victorinox 's headquarters and show room are located in the Swiss town of Ibach.
Elsener, through his company, managed to control the market until 1893, when the second industrial cutler of Switzerland, Paul Boéchat & Cie, headquartered in Delémont in the French-speaking region of Jura, started selling a similar product. This company was later acquired by its then general manager, Théodore Wenger, and renamed the Wenger Company. In 1908 the Swiss government, wanting to prevent an issue over regional favouritism, and perhaps hoping that a bit of competition would lower prices, split the contract between Victorinox and Wenger, with each receiving half of the orders placed. By mutual agreement, Wenger advertised its product as the Genuine Swiss Army Knife and Victorinox used the slogan the Original Swiss Army Knife.
On April 26, 2005, Victorinox acquired Wenger, once again becoming the sole supplier of knives to the Military of Switzerland. Victorinox had kept both consumer brands intact, but on January 30, 2013, Wenger and Victorinox announced that the separate knife brands were going to be merged into one brand: Victorinox. Wenger 's watch and licensing business will continue as a separate brand.
Up to 2008 Victorinox AG and Wenger SA supplied about 50,000 knives to the military of Switzerland each year, and manufactured many more for export, mostly to the United States. Many commercial Victorinox and Wenger Swiss Army knives can be immediately distinguished by the cross logos depicted on their grips; the Victorinox cross logo is surrounded by a shield while the Wenger cross logo is surrounded by a slightly rounded square.
On January 30, 2013, Wenger and Victorinox announced that the separate knife brands were going to be merged into one brand: Victorinox. The press release stated that Wenger 's factory in Delemont would continue to produce knives and all employees at this site will retain their jobs. They further elaborated that an assortment of items from the Wenger line - up will remain in production under the Victorinox brand name. Wenger 's US headquarters will be merged with Victorinox 's location in Monroe, Connecticut. Wenger 's watch and licensing business will continue as a separate brand: Swiss Gear.
Many other companies manufacture similar - looking folding knives in a wide range of quality and prices. The cross-and - shield emblem and the words SWISS ARMY are registered trademarks of Victorinox AG and its related companies.
In 2007, the Swiss government made a request for new updated soldier knives for the Swiss military, for distribution in late 2008. The evaluation phase of the new soldier knife began in February 2008, when Armasuisse issued an invitation to tender. A total of seven suppliers from Switzerland and other countries were invited to participate in the evaluation process. Functional models submitted by suppliers underwent practical testing by military personnel in July 2008, while laboratory tests were used to assess compliance with technical requirements. A cost-benefit analysis was conducted and the model with the best price/performance ratio was awarded the contract. The order for 75,000 soldier knives plus cases was worth 1.38 million SFr. This equates to a purchase price of 18.40 SFr (€12.12 or GB£17.99 in October 2009) per knife plus case.
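The stated per-unit price follows directly from the contract figures. A quick check of the arithmetic, using only the numbers given above:

```python
# Per-unit price implied by the 2008 Swiss soldier knife contract
total_chf = 1_380_000   # contract total: 1.38 million SFr for knives plus cases
units = 75_000          # soldier knives ordered

price_chf = total_chf / units
print(f"{price_chf:.2f} SFr per knife plus case")  # → 18.40 SFr
```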
Victorinox won the contest with a knife based on the one-hand German Army Knife issued by the German Bundeswehr, released in the civilian model lineup as the One-Hand Trekker/Trailmaster model with the addition of a toothpick and tweezers stored in the nylon grip scales (side cover plates). Mass production of the new Soldatenmesser 08 (Soldier Knife 08) for the Swiss Armed Forces started in December 2008.
There are various models of the Swiss Army knife with different tool combinations. Though Victorinox doesn't provide custom knives, it has produced many variations to suit individual users.
Main tools:
Smaller tools:
Scale tools:
Three Victorinox SAK models featured a butane lighter: the Swissflame, Campflame, and Swisschamp XXLT, first introduced in 2002 and then discontinued in 2005. The models were never sold in the United States due to lack of safety features. They used a standard piezoelectric ignition system for easy and quick ignition with adjustable flame, and were designed for operation at altitudes up to 1,500 metres (4,900 ft) above sea level and continuous operation of 10 minutes.
In January 2010, Victorinox announced the Presentation Master models, released in April 2010. The technological tools included a laser pointer, and detachable flash drive with fingerprint reader. Victorinox now sells an updated version called the Slim Jetsetter, with "a premium software package that provides ultra secure data encryption, automatic backup functionality, secure web surfing capabilities, file and email synchronization between the drive and multiple computers, Bluetooth pairing and much more. On the hardware side of things, biometric fingerprint technology, laser pointers, LED lights, Bluetooth remote control and of course, the original Swiss Army Knife implements -- blade, scissors, nail file, screwdriver, key ring and ballpoint pen are standard. * * Not every feature is available on every model within the collection. ''
In 2006, Wenger produced a knife called "The Giant" that included every implement the company ever made, with 87 tools and 141 different functions. It was recognized by Guinness World Records as the world's most multifunctional penknife. It retails for about €798 or US$1,000, though some vendors charge much higher prices.
In the same year, Victorinox released the SwissChamp XAVT, consisting of 118 parts and 80 functions with a retail price of $425. The Guinness Book of Records recognizes a unique 314 - blade Swiss Army - style knife made in 1991 by Master Cutler Hans Meister as the world 's largest penknife, weighing 11 pounds.
Some Swiss Army knives feature locking blades to prevent accidental closure. Wenger was the first to offer a "PackLock" for the main blade on several of their standard 85 mm models. Several large Wenger and Victorinox models feature a locking blade secured by a slide lock that is operated with an unlocking button integrated into the scales. Some Victorinox 111 mm series knives feature a double liner lock that secures both the cutting blade and the large slotted screwdriver/cap opener/wire stripper combination tool, which is designed for prying.
Rivets and flanged bushings made from brass hold all machined steel parts and other tools, separators and the scales together. The rivets are made by cutting and pointing appropriately sized bars of solid brass.
The separators between the tools have been made from aluminium alloy since 1951. This makes the knives lighter. Previously these separating layers were made of nickel - silver.
The martensitic stainless steel alloy used for the cutting blades is optimized for high toughness and corrosion resistance and has a composition of 15 % chromium, 0.60 % silicon, 0.52 % carbon, 0.50 % molybdenum, and 0.45 % manganese and is designated X55CrMo14 or DIN 1.4110 according to Victorinox. After a hardening process at 1040 ° C and annealing at 160 ° C the blades achieve an average hardness of 56 HRC. This steel hardness is suitable for practical use and easy resharpening, but less than achieved in stainless steel alloys used for blades optimized for high wear resistance. According to Victorinox the martensitic stainless steel alloy used for the other parts is X39Cr13 (aka DIN 1.4031, AISI / ASTM 420) and for the springs X20Cr13 (aka DIN 1.4021, but still within AISI / ASTM 420).
The steel used for the wood saws, scissors and nail files has a hardness of HRC 53; the screwdrivers, tin openers and awls have a hardness of HRC 52; and the corkscrew and springs have a hardness of HRC 49.
In addition to special case hardening, the metal saws and files are given a hard chromium plating so that they can also file and cut iron and steel.
Although red Cellulose Acetate Butyrate (CAB, sold under trade names such as Cellidor, Tenite and Tenex) scales are most common, the scales are available in many colors and alternative materials such as nylon and aluminum. Many textures, colors and shapes now appear in the Swiss Army Knife. Since 2006 the scales on some knife models can incorporate textured rubber non-slip inlays, intended to give sufficient grip with moist or wet hands. A modding community has also developed, ranging from professionally produced custom models combining novel materials, colors, finishes and occasionally new tools, such as firesteels or tool 'blades' mounting replaceable surgical scalpel blades, to replacement of the standard scales (handles) with new versions in natural materials such as buffalo horn. In addition to 'limited edition' production runs, numerous examples of basic to professional-level customizations of standard knives, such as retrofitted pocket clips, one-off scales created using 3D printing, decoration using anodization, and new scale materials, can be found by searching for 'SAK mods'.
During assembly, all components are placed on several brass rivets. The first components are generally an aluminum separator and a flat steel spring. Once a layer of tools is installed, another separator and spring are placed for the next layer of tools. This process is repeated until all the desired tool layers and the finishing separator are installed. Once the knife is built, the metal parts are fastened by adding brass flanged bushings to the rivets. The excess length of the rivets is then cut off to make them flush with the bushings. Finally the remaining length of the rivets is flattened into the flanged bushings.
After the assembly of the metal parts, the blades are sharpened at a 15° angle per side, resulting in a 30° V-shaped cutting edge. The blades are then checked with a laser-reflecting goniometer to verify the angle of the cutting edges.
Finally, the scales are applied. Slightly undersized holes in their inner surfaces enclose the bushings, which have a truncated-cone cross-section and are slightly undercut, forming a one-way interference fit when pressed into the generally softer and more elastic scale material. The result is a tight, adhesive-free connection that nonetheless allows new identical-pattern scales to be fitted quickly and easily.
Victorinox models are available in 58 mm (2.3 in), 74 mm (2.9 in), 84 mm (3.3 in), 91 mm (3.6 in), 93 mm (3.7 in), 100 mm (3.9 in), 108 mm (4.3 in) and 111 mm (4.4 in) lengths when closed. The thickness of the knives varies depending on the number of tool layers included. The 91 mm (3.6 in) models offer the most variety in tool configurations in the Victorinox model line with as many as 15 layers.
Wenger models are available in 65 mm (2.6 in), 75 mm (3.0 in), 85 mm (3.3 in), 93 mm (3.7 in), 100 mm (3.9 in), 120 mm (4.7 in) and 130 mm (5.1 in) lengths when closed. Thickness varies depending on the number of tool layers included. The 85 mm (3.3 in) models offer the most variety in tool configurations in the Wenger model line, with as many as 10 layers.
Since their first issue as personal equipment in 1891, the Soldatenmesser (soldier knives) issued by the Swiss Armed Forces have been revised several times. There are five main Modelle (models), whose model numbers refer to the year of introduction into the military supply chain. Several main models have been revised over time and therefore exist in different Ausführungen (versions), also denoted by the year of introduction. The issued models of the Swiss Armed Forces are:
Soldier knives are issued to every recruit or member of the Swiss Armed Forces, and the knives issued to officers have never differed from those issued to non-commissioned officers or privates. A model incorporating a corkscrew and scissors was produced as an officer's tool, but was deemed not "essential for survival", leaving officers to purchase it privately.
The Soldier Knife model 1890 had a spear-point blade, reamer, can opener, screwdriver, and grips made of oak wood scales (handles) treated with rapeseed oil for greater toughness and water repellency, which made them black in color. The wooden grips of the model 1890 tended to crack and chip, so in 1901 they were changed to a hard reddish-brown fiber similar in appearance to wood. The knife was 100 mm (3.9 in) long, 20.5 mm (0.81 in) thick and weighed 144 g (5.1 oz).
The Soldier Knife model 1908 had a clip-point blade rather than the model 1890's spear-point blade, and retained the fiber scales, carbon steel tools, and nickel-silver bolsters, liners, and divider. The knife was 100 mm (3.9 in) long, 16.5 mm (0.65 in) thick and weighed 125 g (4.4 oz). The contract with the Swiss Army split production equally between the Victorinox and Wenger companies.
The Soldier Knife model 1951 had fiber scales; nickel-silver bolsters, liners, and divider; and a spear-point blade. It was the first Swiss Armed Forces issue model with tools made of stainless steel. The screwdriver now had a scraper arc on one edge. The knife was 93 mm (3.7 in) long, 13.5 mm (0.53 in) thick and weighed 90 g (3.2 oz).
The Soldier Knife model 1961 has a 93 mm (3.7 in) long knurled Alox handle with the Swiss crest, a drop-point blade, a reamer, a blade combining bottle opener, screwdriver, and wire stripper, and a combined can opener and small screwdriver. The knife was 12 mm (0.47 in) thick and weighed 72 g (2.5 oz).
This official Swiss military model also contains a brass spacer, which allows the knife, with the screwdriver and the reamer extended simultaneously, to be used in assembling the SIG 550 and SIG 510 assault rifles: the knife serves as a restraint for the firing pin during assembly of the lock. The Soldier Knife model 1961 was manufactured only by Victorinox and Wenger and was the first issued knife bearing the Swiss coat of arms on the handle.
The Soldier Knife 08 was first issued to the Swiss Armed Forces beginning with the first basic training sessions of 2009.
The Soldier Knife 08 features a 111 mm (4.4 in) long ergonomic handle with textured polymer non-slip inlays incorporated into the nylon grip shells, a double liner locking system, a one-hand-opening 86 mm (3.4 in) long locking, partly serrated, chisel-ground drop-point blade, a wood saw, a can opener with small 3 mm (0.12 in) slotted screwdriver, a locking bottle opener with large 7 mm (0.28 in) slotted screwdriver and wire stripper/bender, a reamer, a Phillips (PH2) screwdriver, and a 12 mm (0.47 in) diameter split keyring. The Soldier Knife 08 is 34.5 mm (1.36 in) wide and 18 mm (0.71 in) thick, its overall length opened is 197 mm (7.8 in), and it weighs 131 g (4.6 oz). It is manufactured only by Victorinox.
The armed forces of more than 20 different nations have issued or approved the use of various versions of Swiss army knives made by Victorinox, among them the forces of Germany, France, the Netherlands, Norway, Malaysia and the United States (NSN 1095-01-653-1166, Knife, Combat).
West German Army knife, 1985
New German Army knife, since 2003
Malaysian Army knife
The Swiss Army knife has been present in space missions carried out by NASA since the late 1970s. In 1978, NASA sent a letter of confirmation to Victorinox regarding a purchase of 50 knives known as the Master Craftsman model. In 1985, Edward M. Payton, brother of astronaut Gary E. Payton, sent a letter to Victorinox, asking about getting a Master Craftsman knife after seeing the one his brother used in space. There are other stories as well of repairs conducted in space using a Swiss Army knife.
For its design, the Swiss Army knife has been added to the collections of the Museum of Modern Art in New York and Munich's State Museum of Applied Art. The term "Swiss Army" is currently a registered trademark owned by Victorinox AG and its subsidiary, Wenger SA.
The television show MacGyver features Angus MacGyver, who frequently uses different Swiss Army knives in various episodes to solve problems and construct simple objects.
In the TV show Psych, Shawn Spencer and his father always carry large Swiss Army knives to work their way through occasional problems.
The term "Swiss Army knife '' has entered popular culture as a metaphor for usefulness and adaptability. The multi-purpose nature of the tool has also inspired a number of other gadgets.
Of these books, those regularly available at this time (for example on Amazon) are Jackson's Collector's Companion, Lubkemann's Whittling Book, and Young's Owner's Manual. Some of the others occasionally turn up on Amazon and auction sites such as eBay.
who played the corpse in the movie the big chill | The Big Chill (film) - wikipedia
The Big Chill is a 1983 American comedy-drama film directed by Lawrence Kasdan, starring Tom Berenger, Glenn Close, Jeff Goldblum, William Hurt, Kevin Kline, Mary Kay Place, Meg Tilly, and JoBeth Williams. The plot focuses on a group of baby boomers who attended the University of Michigan and reunite after 15 years when their friend Alex commits suicide. Kevin Costner was cast as Alex, but all scenes showing his face were cut.
It was filmed in Beaufort, South Carolina.
The soundtrack features soul, R&B, and pop-rock music from the 1960s and '70s, including tracks by Creedence Clearwater Revival, Aretha Franklin, Marvin Gaye, The Temptations, The Rolling Stones, and Three Dog Night.
The Big Chill was adapted for television as the short-lived 1985 CBS series Hometown. Later, it influenced the TV series thirtysomething.
Harold Cooper (Kevin Kline) is bathing his young son when his wife, Sarah (Glenn Close), receives a phone call at their Richmond home telling her that their friend, Alex (Kevin Costner), has committed suicide by slashing his wrists in the bathtub of their vacation house in South Carolina, where he had been staying.
At the funeral, Harold and Sarah are reunited with college friends from the University of Michigan. They include Sam (Tom Berenger), a famous television actor now living in Los Angeles; Meg (Mary Kay Place), a chain-smoking former public defender who is now a real estate attorney in Atlanta and wants a child; Michael (Jeff Goldblum), a sex-obsessed People magazine journalist; Nick (William Hurt), a Vietnam War veteran and former radio host who suffers from impotence; and Karen (JoBeth Williams), a housewife from suburban Detroit who is unhappy in her marriage to her advertising executive husband, Richard (Don Galloway). Also present is Chloe (Meg Tilly), Alex's much younger girlfriend.
After the burial, everyone goes from the cemetery to Harold and Sarah's vacation house, where they are invited to stay for the weekend. During the first night there, a bat flies into the attic while Meg and Nick are getting reacquainted. Sam later finds Nick watching television, and they briefly talk about Karen. The two then go into the kitchen and find Richard making a sandwich, and the three make small talk which turns into a discussion about responsibility and adulthood. At the end of the discussion, Richard states, "Nobody said it was going to be fun. At least, nobody said it to me."
The next morning Harold and Nick go jogging. Harold tells Nick that his running shoe company is about to be bought out by a large corporation, and that he's about to become rich. Harold confides in Nick that Sarah and Alex had an affair five years earlier. Nick comforts Harold by saying, "She didn't marry Alex."
Richard returns home to look after his kids, but Karen decides to stay in South Carolina for the weekend. Nick, Harold, Michael and Chloe go for a drive (while "Good Lovin'" by the Rascals plays on the car radio), while Sam and Karen go shopping. Meg reveals to Sarah that she wants to have a child, and that she is going to ask Sam to be the father, knowing now that Nick can't. Out in the countryside, Harold listens to Michael's plans to buy a nightclub. Chloe takes Nick to the abandoned house that she and Alex were going to renovate. She tells him that he reminds her of Alex, to which Nick replies, "I ain't him."
During dinner, Sarah starts tearing up over Alex as the group talks about him. Harold puts "Ain't Too Proud to Beg" by the Temptations on the stereo, and everyone dances while cleaning up the dishes. While the others sit around and smoke marijuana, Meg asks Sam to father her baby, but he declines.
The next morning Nick, Sam and Harold go jogging, and the subject of Alex's suicide comes up again. Harold's surprise arrives: sneakers for everyone to wear during the upcoming Michigan football game. The group, minus Nick, watches the game on TV, while Sarah tells Karen about her brief affair with Alex and how it affected their friendship negatively.
During the game, Michael offers to father Meg's child, alluding to the fact that they had sex in college during the March on Washington. At halftime, Chloe, Sam, Harold and Michael go outside to play touch football. Nick returns, with a police car following him. The officer says that Nick ran a red light and was belligerent, but offers to drop the charges if Sam will hop into Nick's Porsche the way his TV character, J.T. Lancer, always does. Sam is unsuccessful and hurts himself, but the officer drops the charges anyway and apologizes to Harold.
Karen later tells Sam that she loves him and wants to leave Richard and live with Sam and her two sons. When they kiss, Sam pulls away and tells Karen not to leave Richard, as she will regret it in the long run. He confesses that it was "boredom" that caused his own marriage to fail, and he doesn't want her to make the same mistake. Karen feels misled and angrily storms into the house.
Harold is on the phone with his daughter, Molly, and lets Meg talk to her. Observing their interaction on the phone, Sarah decides to let Harold impregnate Meg, but does not tell him yet.
The group once again discusses Alex. Nick says, "Alex died for most of us a long time ago," but Sam disagrees and leaves. Karen follows him, and the two have sex outside. Sarah tells Harold about Meg's situation, while Chloe and Nick go to bed together, even though he warns her of his condition. Meg and Harold then have sex ("I feel like I got a great break on a used car," she says), while Michael and Sarah joke around and interview each other with a video camera.
In the morning while Karen is packing her clothes, she subtly tells Sam that she has decided to stay with Richard. At the breakfast table, Harold reveals that Nick and Chloe will be staying in the guest house for a while so they can renovate the old abandoned house. Sam and Nick then make up after their argument the night before. Nick gives Michael an old clipping of an article he had written about Alex, which Alex had saved. At the end of the movie, Michael states, tongue in cheek, "Sarah, Harold. We took a secret vote. We're not leaving. We're never leaving." They all laugh and "Joy to the World" plays as the credits roll.
Richard Corliss of Time described The Big Chill as a "funny and ferociously smart movie," stating:
These Americans are in their 30s today, but back then they were the Now Generation. Right Now: give me peace, give me justice, gimme good lovin'. For them, in the voluptuous bloom of youth, the '60s was a banner you could carry aloft or wrap yourself inside. A verdant anarchy of politics, sex, drugs and style carpeted the landscape. And each impulse was scored to the rollick of the new music: folk, rock, pop, R&B. The armies of the night marched to Washington, but they boogied to Liverpool and Motown. Now, in 1983, Harold & Sarah & Sam & Karen & Michael & Meg & Nick -- classmates all from the University of Michigan at the end of our last interesting decade -- have come to the funeral of a friend who has slashed his wrists. Alex was a charismatic prodigy of science and friendship and progressive hell-raising who opted out of academe to try social work, then manual labor, then suicide. He is presented as a victim of terminal decompression from the orbital flight of his college years: a worst-case scenario his friends must ponder, probing themselves for symptoms of the disease.
Vincent Canby of The New York Times argued that the film is a "very accomplished, serious comedy" and an "unusually good choice to open this year's (New York Film Festival) in that it represents the best of mainstream American film making."
Roger Ebert gave the film two and a half stars out of four and said, "The Big Chill is a splendid technical exercise. It has all the right moves. It knows all the right words. Its characters have all the right clothes, expressions, fears, lusts and ambitions. But there's no payoff and it doesn't lead anywhere. I thought at first that was a weakness of the movie. There also is the possibility that it's the movie's message."
The Big Chill won two major awards:
It was nominated for three Oscars:
Other nominations include:
In 2004 "Ain't Too Proud to Beg" finished #94 in AFI's 100 Years... 100 Songs poll.
The film was parodied by T. Coraghessan Boyle in his short story The Little Chill. The story begins, "Hal had known Rob and Irene, Jill, Harvey, Tottle, and Pesky since elementary school, and they were all 40 going on 60."
Ten of the songs from the film were released on the soundtrack album, with four additional songs made available on the CD. The remainder of the film's songs (aside from the Rolling Stones' "You Can't Always Get What You Want") were released in 1984 on a second soundtrack album.
In 1998, both albums were remastered, the first without the four additional CD tracks, which had also appeared on More Songs and were left there. In 2004, Hip-O Records released a Deluxe edition, containing not only sixteen of the eighteen songs from the film ("Quicksilver Girl" by The Steve Miller Band was unavailable) but three additional film instrumentals. A second "music of a generation" disc of nineteen additional tracks was included as well, some of which had appeared on both the original soundtrack and the More Songs release.
most td passes in a college football season | List of NCAA Football Records - wikipedia
This is a list of individual National Collegiate Athletic Association (NCAA) American football records, including Division I (FBS, and FCS), II, and III.
The NCAA lists two different records for team interceptions in a game. The listed record is for "most passes intercepted against a major-college opponent". The unrestricted record for most passes intercepted is held by Brown, with 11, in a game versus Rhode Island on October 8, 1949.
The NCAA record book includes a special note about 6 interceptions by Dick Miller (Akron) versus Baldwin-Wallace on Oct. 23, 1937, before the collection of division records.
Tulane University lists 5 interceptions by Mitchell Price in a game versus Tennessee–Chattanooga on September 3, 1988, which is not recognized as an official statistic by the NCAA.
Since the 1960 season
Mike Singletary (Baylor) recorded 232 tackles in 1978, but the NCAA did not begin collecting defensive statistics until 2000.
Joe Norman (Indiana) recorded 199 tackles in 11 games in 1978 for an 18.09 average, but the NCAA did not begin collecting defensive statistics until 2000.
Joe Norman (Indiana) recorded 141 solo tackles in 1978, but the NCAA did not begin collecting defensive statistics until 2000.
Joe Norman (Indiana) recorded 141 solo tackles in 11 games in 1978 for a 12.81 average, but the NCAA did not begin collecting defensive statistics until 2000.
Derrick Thomas (Alabama) and Tedy Bruschi (Arizona) each recorded 52 sacks, but the NCAA did not start collecting official defensive statistics until 2000.
Derrick Thomas (Alabama) recorded 27 sacks in 1988, but the NCAA did not start collecting official defensive statistics until 2000.
Shay Muirbrook (BYU) recorded 6 sacks in the 1997 Cotton Bowl, but the NCAA did not start collecting official defensive statistics until 2000 and does not recognize bowl game statistics for any category prior to 2002.
Minimum of 1.2 returns per game
Minimum of 1.2 returns per game
Minimum 1.2 returns per game
Minimum 1.2 returns per game
Minimum one punt return and one kickoff return
Note: The longest field goal ever made in collegiate competition was 69 yards by Ove Johansson of Abilene Christian University, which at the time (1976) was competing as an NAIA, not an NCAA, school.
who is the minister of transport in malawi | Cabinet of Malawi - Wikipedia
The Cabinet of Malawi is the executive branch of the government, made up of the President, Vice President, Ministers and Deputy Ministers responsible for the different departments.
where does the pacific ocean begin and end | Pacific Ocean - Wikipedia
The Pacific Ocean is the largest and deepest of Earth 's oceanic divisions. It extends from the Arctic Ocean in the north to the Southern Ocean (or, depending on definition, to Antarctica) in the south and is bounded by Asia and Australia in the west and the Americas in the east.
At 165,250,000 square kilometers (63,800,000 square miles) in area (as defined with an Antarctic southern border), this largest division of the World Ocean (and, in turn, of the hydrosphere) covers about 46% of Earth's water surface and about one-third of its total surface area, making it larger than all of Earth's land area combined. The centers of both the Water Hemisphere and the Western Hemisphere lie in the Pacific Ocean. The equator subdivides it into the North Pacific Ocean and South Pacific Ocean, with two exceptions: the Galápagos and Gilbert Islands, while straddling the equator, are deemed wholly within the South Pacific. Its mean depth is 4,280 meters (14,040 feet). The Mariana Trench in the western North Pacific is the deepest point in the world, reaching a depth of 10,911 meters (35,797 feet). The western Pacific has many peripheral seas.
Though the peoples of Asia and Oceania have traveled the Pacific Ocean since prehistoric times, the eastern Pacific was first sighted by Europeans in the early 16th century when Spanish explorer Vasco Núñez de Balboa crossed the Isthmus of Panama in 1513 and discovered the great "southern sea", which he named Mar del Sur (Spanish for "southern sea"). The ocean's current name was coined by Portuguese explorer Ferdinand Magellan during the Spanish circumnavigation of the world in 1521, as he encountered favorable winds on reaching the ocean. He called it Mar Pacífico, which in both Portuguese and Spanish means "peaceful sea".
Important human migrations occurred in the Pacific in prehistoric times. About 3000 BC, the Austronesian peoples on the island of Taiwan mastered the art of long-distance canoe travel and spread themselves and their languages south to the Philippines, Indonesia, and maritime Southeast Asia; west towards Madagascar; southeast towards New Guinea and Melanesia (intermarrying with native Papuans); and east to the islands of Micronesia, Oceania and Polynesia.
Long-distance trade developed all along the coast from Mozambique to Japan. Trade, and therefore knowledge, extended to the Indonesian islands but apparently not to Australia. By at least 878, when there was a significant Islamic settlement in Canton, much of this trade was controlled by Arabs or Muslims. In 219 BC, Xu Fu sailed out into the Pacific searching for the elixir of immortality. From 1404 to 1433, Zheng He led expeditions into the Indian Ocean.
The first contact of European navigators with the western edge of the Pacific Ocean was made by the Portuguese expeditions of António de Abreu and Francisco Serrão, via the Lesser Sunda Islands, to the Maluku Islands, in 1512, and with Jorge Álvares's expedition to southern China in 1513, both ordered by Afonso de Albuquerque from Malacca.
The east side of the ocean was discovered by Spanish explorer Vasco Núñez de Balboa in 1513 after his expedition crossed the Isthmus of Panama and reached a new ocean. He named it Mar del Sur (literally "Sea of the South" or "South Sea") because the ocean was to the south of the coast of the isthmus where he first observed the Pacific.
Later, Portuguese explorer Ferdinand Magellan sailed the Pacific east to west on a Castilian (Spanish) expedition of world circumnavigation starting in 1519. Magellan called the ocean Pacífico (or "Pacific", meaning "peaceful") because, after sailing through the stormy seas off Cape Horn, the expedition found calm waters. The ocean was often called the Sea of Magellan in his honor until the eighteenth century. Although Magellan himself died in the Philippines in 1521, Spanish Basque navigator Juan Sebastián Elcano led the expedition back to Spain across the Indian Ocean and round the Cape of Good Hope, completing the first world circumnavigation in a single expedition in 1522. Sailing around and east of the Moluccas between 1525 and 1527, Portuguese expeditions discovered the Caroline Islands, the Aru Islands, and Papua New Guinea. In 1542–43 the Portuguese also reached Japan.
In 1564, five Spanish ships carrying 379 explorers crossed the ocean from Mexico, led by Miguel López de Legazpi, and sailed to the Philippines and Mariana Islands. For the remainder of the 16th century, Spanish influence was paramount, with ships sailing from Mexico and Peru across the Pacific Ocean to the Philippines, via Guam, and establishing the Spanish East Indies. The Manila galleons operated for two and a half centuries, linking Manila and Acapulco in one of the longest trade routes in history. Spanish expeditions also discovered Tuvalu, the Marquesas, the Cook Islands, the Solomon Islands, and the Admiralty Islands in the South Pacific.
Later, in the quest for Terra Australis ("the great Southern Land"), Spanish explorations in the 17th century, such as the expedition led by the Portuguese navigator Pedro Fernandes de Queirós, discovered the Pitcairn and Vanuatu archipelagos, and sailed the Torres Strait between Australia and New Guinea, named after navigator Luís Vaz de Torres. Dutch explorers, sailing around southern Africa, also engaged in discovery and trade; Willem Janszoon made the first completely documented European landing in Australia (1606), on Cape York Peninsula, and Abel Janszoon Tasman circumnavigated and landed on parts of the Australian continental coast and discovered Tasmania and New Zealand in 1642.
In the 16th and 17th centuries Spain considered the Pacific Ocean a mare clausum, a sea closed to other naval powers. As the only known entrance from the Atlantic, the Strait of Magellan was at times patrolled by fleets sent to prevent the entrance of non-Spanish ships. On the western end of the Pacific Ocean the Dutch threatened the Spanish Philippines.
The 18th century marked the beginning of major exploration by the Russians in Alaska and the Aleutian Islands, such as the First Kamchatka expedition and the Great Northern Expedition, led by the Danish Russian navy officer Vitus Bering. Spain also sent expeditions to the Pacific Northwest, reaching Vancouver Island in southern Canada and Alaska. The French explored and settled Polynesia, and the British made three voyages with James Cook to the South Pacific and Australia, Hawaii, and the North American Pacific Northwest. In 1768, Pierre-Antoine Véron, a young astronomer accompanying Louis Antoine de Bougainville on his voyage of exploration, established the width of the Pacific with precision for the first time in history. One of the earliest voyages of scientific exploration was organized by Spain in the Malaspina Expedition of 1789–1794. It sailed vast areas of the Pacific, from Cape Horn to Alaska, Guam and the Philippines, New Zealand, Australia, and the South Pacific.
Growing imperialism during the 19th century resulted in the occupation of much of Oceania by other European powers, and later, Japan and the United States. Significant contributions to oceanographic knowledge were made by the voyages of HMS Beagle in the 1830s, with Charles Darwin aboard; HMS Challenger during the 1870s; the USS Tuscarora (1873–76); and the German Gazelle (1874–76).
In Oceania, France gained a leading position as an imperial power after making Tahiti and New Caledonia protectorates in 1842 and 1853, respectively. After navy visits to Easter Island in 1875 and 1887, Chilean navy officer Policarpo Toro negotiated the incorporation of the island into Chile with the native Rapa Nui in 1888. By occupying Easter Island, Chile joined the imperial nations. By 1900 nearly all Pacific islands were in the control of Britain, France, the United States, Germany, Japan, and Chile.
Although the United States gained control of Guam and the Philippines from Spain in 1898, Japan controlled most of the western Pacific by 1914 and occupied many other islands during World War II. However, by the end of that war, Japan was defeated and the U.S. Pacific Fleet was the virtual master of the ocean. Since the end of World War II, many former colonies in the Pacific have become independent states.
The Pacific separates Asia and Australia from the Americas. It may be further subdivided by the equator into northern (North Pacific) and southern (South Pacific) portions. It extends from the Antarctic region in the south to the Arctic in the north. The Pacific Ocean encompasses approximately one-third of the Earth's surface, having an area of 165,200,000 km² (63,800,000 sq mi), significantly larger than Earth's entire landmass of some 150,000,000 km² (58,000,000 sq mi).
Extending approximately 15,500 km (9,600 mi) from the Bering Sea in the Arctic to the northern extent of the circumpolar Southern Ocean at 60°S (older definitions extend it to Antarctica's Ross Sea), the Pacific reaches its greatest east-west width at about 5°N latitude, where it stretches approximately 19,800 km (12,300 mi) from Indonesia to the coast of Colombia, halfway around the world, and more than five times the diameter of the Moon. The lowest known point on Earth, the Mariana Trench, lies 10,911 m (35,797 ft; 5,966 fathoms) below sea level. Its average depth is 4,280 m (14,040 ft; 2,340 fathoms), putting the total water volume at roughly 710,000,000 km³ (170,000,000 cu mi).
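The quoted volume follows from the area and mean depth figures given in this article; a quick back-of-envelope check (the flat-floor approximation and the rounding are our own, not the article's):

```python
# Back-of-envelope check: volume ≈ surface area × mean depth.
# Figures taken from the article; the product is only approximate,
# since the real ocean floor is not flat.
area_km2 = 165_200_000   # surface area in km²
mean_depth_km = 4.28     # mean depth (4,280 m) expressed in km

volume_km3 = area_km2 * mean_depth_km
print(f"{volume_km3:,.0f} km³")  # ≈ 707,056,000 km³, close to the quoted 710,000,000 km³
```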
Due to the effects of plate tectonics, the Pacific Ocean is currently shrinking by roughly 2.5 cm (1 in) per year on three sides, averaging a loss of roughly 0.52 km² (0.20 sq mi) a year. By contrast, the Atlantic Ocean is increasing in size.
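The two shrinkage figures are mutually consistent if one assumes, very roughly, a uniform closing rate along the converging margins; the implied boundary length below is our own illustrative inference, not a figure from the article:

```python
# Rough consistency check between the two quoted shrinkage figures,
# assuming (simplistically) a uniform 2.5 cm/yr closing rate along
# all converging margins.
area_loss_m2_per_year = 0.52 * 1_000_000  # 0.52 km²/yr converted to m²/yr
closing_rate_m_per_year = 0.025           # 2.5 cm/yr converted to m/yr

implied_boundary_km = area_loss_m2_per_year / closing_rate_m_per_year / 1000
print(f"{implied_boundary_km:,.0f} km")  # ≈ 20,800 km of converging margin
```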
Along the Pacific Ocean 's irregular western margins lie many seas, the largest of which are the Celebes Sea, Coral Sea, East China Sea, Philippine Sea, Sea of Japan, South China Sea, Sulu Sea, Tasman Sea, and Yellow Sea. The Indonesian Seaway (including the Strait of Malacca and Torres Strait) joins the Pacific and the Indian Ocean to the west, and Drake Passage and the Strait of Magellan link the Pacific with the Atlantic Ocean on the east. To the north, the Bering Strait connects the Pacific with the Arctic Ocean.
As the Pacific straddles the 180th meridian, the West Pacific (or western Pacific, near Asia) is in the Eastern Hemisphere, while the East Pacific (or eastern Pacific, near the Americas) is in the Western Hemisphere.
The South Pacific Ocean harbors the Southeast Indian Ridge, which crosses from south of Australia and turns into the Pacific-Antarctic Ridge (north of the South Pole), merging with another ridge (south of South America) to form the East Pacific Rise, which in turn connects with another ridge (south of North America) that overlooks the Juan de Fuca Ridge.
For most of Magellan 's voyage from the Strait of Magellan to the Philippines, the explorer indeed found the ocean peaceful. However, the Pacific is not always peaceful. Many tropical storms batter the islands of the Pacific. The lands around the Pacific Rim are full of volcanoes and often affected by earthquakes. Tsunamis, caused by underwater earthquakes, have devastated many islands and in some cases destroyed entire towns.
The Martin Waldseemüller map of 1507 was the first to show the Americas separating two distinct oceans. Later, the Diogo Ribeiro map of 1529 was the first to show the Pacific at about its proper size.
The Pacific contains the majority of the world 's islands -- about 25,000 in all. The islands entirely within the Pacific Ocean can be divided into three main groups known as Micronesia, Melanesia and Polynesia. Micronesia, which lies north of the equator and west of the International Date Line, includes the Mariana Islands in the northwest, the Caroline Islands in the center, the Marshall Islands to the east and the islands of Kiribati in the southeast.
Melanesia, to the southwest, includes New Guinea, the world 's second largest island after Greenland and by far the largest of the Pacific islands. The other main Melanesian groups from north to south are the Bismarck Archipelago, the Solomon Islands, Santa Cruz, Vanuatu, Fiji and New Caledonia.
The largest area, Polynesia, stretching from Hawaii in the north to New Zealand in the south, also encompasses Tuvalu, Tokelau, Samoa, Tonga and the Kermadec Islands to the west, the Cook Islands, Society Islands and Austral Islands in the center, and the Marquesas Islands, Tuamotu, Mangareva Islands, and Easter Island to the east.
Islands in the Pacific Ocean are of four basic types: continental islands, high islands, coral reefs and uplifted coral platforms. Continental islands lie outside the andesite line and include New Guinea, the islands of New Zealand, and the Philippines. Some of these islands are structurally associated with nearby continents. High islands are of volcanic origin, and many contain active volcanoes. Among these are Bougainville, Hawaii, and the Solomon Islands.
The coral reefs of the South Pacific are low - lying structures that have built up on basaltic lava flows under the ocean 's surface. One of the most dramatic is the Great Barrier Reef off northeastern Australia with chains of reef patches. A second island type formed of coral is the uplifted coral platform, which is usually slightly larger than the low coral islands. Examples include Banaba (formerly Ocean Island) and Makatea in the Tuamotu group of French Polynesia.
Pacific Ocean up sun from the rocks
Point Reyes headlands, Point Reyes National Seashore, California
Tahuna maru islet, French Polynesia
Los Molinos on the coast of Southern Chile
The volume of the Pacific Ocean, representing about 50.1 percent of the world 's oceanic water, has been estimated at some 714 million cubic kilometers (171 million cubic miles). Surface water temperatures in the Pacific can vary from − 1.4 ° C (29.5 ° F), the freezing point of sea water, in the poleward areas to about 30 ° C (86 ° F) near the equator. Salinity also varies latitudinally, reaching a maximum of 37 parts per thousand in the southeastern area. The water near the equator, which can have a salinity as low as 34 parts per thousand, is less salty than that found in the mid-latitudes because of abundant equatorial precipitation throughout the year. The lowest salinity values, less than 32 parts per thousand, are found in the far north, as less evaporation of seawater takes place in these frigid areas. The motion of Pacific waters is generally clockwise in the Northern Hemisphere (the North Pacific gyre) and counter-clockwise in the Southern Hemisphere. The North Equatorial Current, driven westward along latitude 15 ° N by the trade winds, turns north near the Philippines to become the warm Japan or Kuroshio Current.
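The stated share also pins down an implied total for the world ocean. A small arithmetic check, using only the two figures given in the paragraph above:

```python
# If the Pacific holds 714 million km^3 and that is 50.1 % of the
# world's oceanic water, the implied world ocean volume follows by division.
pacific_volume_km3 = 714_000_000
pacific_share = 0.501

world_volume_km3 = pacific_volume_km3 / pacific_share
print(f"Implied world ocean volume: {world_volume_km3:,.0f} km^3")
# About 1.425 billion km^3.
```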
Turning eastward at about 45 ° N, the Kuroshio forks and some water moves northward as the Aleutian Current, while the rest turns southward to rejoin the North Equatorial Current. The Aleutian Current branches as it approaches North America and forms the base of a counter-clockwise circulation in the Bering Sea. Its southern arm becomes the cool, slow, south - flowing California Current. The South Equatorial Current, flowing west along the equator, swings southward east of New Guinea, turns east at about 50 ° S, and joins the main westerly circulation of the South Pacific, which includes the Earth - circling Antarctic Circumpolar Current. As it approaches the Chilean coast, the South Equatorial Current divides; one branch flows around Cape Horn and the other turns north to form the Peru or Humboldt Current.
The climate patterns of the Northern and Southern Hemispheres generally mirror each other. The trade winds in the southern and eastern Pacific are remarkably steady while conditions in the North Pacific are far more varied with, for example, cold winter temperatures on the east coast of Russia contrasting with the milder weather off British Columbia during the winter months due to the preferred flow of ocean currents.
In the tropical and subtropical Pacific, the El Niño Southern Oscillation (ENSO) affects weather conditions. To determine the phase of ENSO, the most recent three - month sea surface temperature average for the area approximately 3,000 km (1,900 mi) to the southeast of Hawaii is computed, and if the region is more than 0.5 ° C (0.9 ° F) above or below normal for that period, then an El Niño or La Niña is considered in progress.
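The decision rule described above can be sketched as a small function. This is only an illustration of the threshold logic as stated here: operational ENSO determinations (e.g. NOAA's Oceanic Niño Index) additionally require the anomaly to persist across several overlapping three-month periods, and the function name below is ours, not an official API.

```python
def enso_phase(sst_anomaly_c: float, threshold_c: float = 0.5) -> str:
    """Classify the ENSO phase from a three-month mean sea surface
    temperature anomaly (deg C) for the monitored region, using the
    'more than 0.5 deg C above or below normal' rule described above."""
    if sst_anomaly_c > threshold_c:
        return "El Nino"
    if sst_anomaly_c < -threshold_c:
        return "La Nina"
    return "neutral"

print(enso_phase(0.8))   # El Nino
print(enso_phase(-0.9))  # La Nina
print(enso_phase(0.3))   # neutral
```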
In the tropical western Pacific, the monsoon and the related wet season during the summer months contrast with dry winds in the winter which blow over the ocean from the Asian landmass. Worldwide, tropical cyclone activity peaks in late summer, when the difference between temperatures aloft and sea surface temperatures is the greatest. However, each particular basin has its own seasonal patterns. On a worldwide scale, May is the least active month, while September is the most active month. November is the only month in which all the tropical cyclone basins are active. The Pacific hosts the two most active tropical cyclone basins, which are the northwestern Pacific and the eastern Pacific. Pacific hurricanes form south of Mexico, sometimes striking the western Mexican coast and occasionally the southwestern United States between June and October, while typhoons forming in the northwestern Pacific move into southeast and east Asia from May to December. Tropical cyclones also form in the South Pacific basin, where they occasionally impact island nations.
In the Arctic, icing from October to May can present a hazard for shipping, while persistent fog occurs from June to December. A climatological low in the Gulf of Alaska keeps the southern coast wet and mild during the winter months. The Westerlies and associated jet stream within the mid-latitudes can be particularly strong, especially in the Southern Hemisphere, due to the temperature difference between the tropics and Antarctica, which records the coldest temperature readings on the planet. In the Southern Hemisphere, because of the stormy and cloudy conditions associated with extratropical cyclones riding the jet stream, it is usual to refer to the Westerlies as the Roaring Forties, Furious Fifties and Shrieking Sixties according to the varying degrees of latitude.
The ocean was first mapped by Abraham Ortelius; he called it Maris Pacifici following Ferdinand Magellan 's description of it as "a pacific sea '' during his circumnavigation from 1519 to 1522. To Magellan, it seemed much more calm (pacific) than the Atlantic.
The andesite line is the most significant regional distinction in the Pacific. A petrologic boundary, it separates the deeper, mafic igneous rock of the Central Pacific Basin from the partially submerged continental areas of felsic igneous rock on its margins. The andesite line follows the western edge of the islands off California and passes south of the Aleutian arc, along the eastern edge of the Kamchatka Peninsula, the Kuril Islands, Japan, the Mariana Islands, the Solomon Islands, and New Zealand 's North Island.
The dissimilarity continues northeastward along the western edge of the Andes Cordillera along South America to Mexico, returning then to the islands off California. Indonesia, the Philippines, Japan, New Guinea, and New Zealand lie outside the andesite line.
Within the closed loop of the andesite line are most of the deep troughs, submerged volcanic mountains, and oceanic volcanic islands that characterize the Pacific basin. Here basaltic lavas gently flow out of rifts to build huge dome - shaped volcanic mountains whose eroded summits form island arcs, chains, and clusters. Outside the andesite line, volcanism is of the explosive type, and the Pacific Ring of Fire is the world 's foremost belt of explosive volcanism. The Ring of Fire is named after the several hundred active volcanoes that sit above the various subduction zones.
The Pacific Ocean is the only ocean which is almost totally bounded by subduction zones. Only the Antarctic and Australian coasts have no nearby subduction zones.
The Pacific Ocean was born 750 million years ago at the breakup of Rodinia, although it is generally called the Panthalassic Ocean until the breakup of Pangea, about 200 million years ago. The oldest Pacific Ocean floor is only around 180 Ma old, with older crust subducted by now.
The Pacific Ocean contains several long seamount chains, formed by hotspot volcanism. These include the Hawaiian -- Emperor seamount chain and the Louisville Ridge.
The exploitation of the Pacific 's mineral wealth is hampered by the ocean 's great depths. In shallow waters of the continental shelves off the coasts of Australia and New Zealand, petroleum and natural gas are extracted, and pearls are harvested along the coasts of Australia, Japan, Papua New Guinea, Nicaragua, Panama, and the Philippines, although in sharply declining volume in some cases.
Fish are an important economic asset in the Pacific. The shallower shoreline waters of the continents and the more temperate islands yield herring, salmon, sardines, snapper, swordfish, and tuna, as well as shellfish. Overfishing has become a serious problem in some areas. For example, catches in the rich fishing grounds of the Okhotsk Sea off the Russian coast have been reduced by at least half since the 1990s as a result of overfishing.
The quantity of small plastic fragments floating in the north - east Pacific Ocean increased a hundredfold between 1972 and 2012.
Marine pollution is a generic term for the harmful entry into the ocean of chemicals or particles. The main sources are communities and industries that dispose of their waste into rivers. The rivers then empty into the ocean, often also carrying chemicals used as fertilizers in agriculture. The excess of oxygen - depleting chemicals in the water leads to hypoxia and the creation of a dead zone.
Marine debris, also known as marine litter, is human - created waste that has ended up floating in a lake, sea, ocean, or waterway. Oceanic debris tends to accumulate at the center of gyres and coastlines, frequently washing aground where it is known as beach litter.
In addition, the Pacific Ocean has served as the crash site of satellites, including Mars 96, Fobos - Grunt, and Upper Atmosphere Research Satellite.
History of the city
Towns and cities have a long history, although opinions vary on which ancient settlement are truly cities. The benefits of dense settlement included reduced transport costs, exchange of ideas, sharing of natural resources, large local markets, and in some cases amenities such as running water and sewage disposal. Possible costs would include higher rate of crime, higher mortality rates, higher cost of living, worse pollution, traffic and high commuting times. Cities grow when the benefits of proximity between people and firms are higher than the cost.
There is not enough evidence to assert what conditions gave rise to the first cities. Some theorists have speculated on what they consider suitable pre-conditions and basic mechanisms that might have been important driving forces.
The conventional view holds that cities first formed after the Neolithic revolution. The Neolithic revolution brought agriculture, which made denser human populations possible, thereby supporting city development. The advent of farming encouraged hunter - gatherers to abandon nomadic lifestyles and to settle near others who lived by agricultural production. The increased population density encouraged by farming and the increased output of food per unit of land created conditions that seem more suitable for city - like activities. In his book, Cities and Economic Development, Paul Bairoch takes up this position in his argument that agricultural activity appears necessary before true cities can form.
According to Vere Gordon Childe, for a settlement to qualify as a city, it must have enough surplus of raw materials to support trade and a relatively large population. Bairoch points out that, due to sparse population densities that would have persisted in pre-Neolithic, hunter - gatherer societies, the amount of land that would be required to produce enough food for subsistence and trade for a large population would make it impossible to control the flow of trade. To illustrate this point, Bairoch offers an example: "Western Europe during the pre-Neolithic, (where) the density must have been less than 0.1 person per square kilometre ''. Using this population density as a base for calculation, and allotting 10 % of food towards surplus for trade and assuming that city dwellers do no farming, he calculates that "... to maintain a city with a population of 1,000, and without taking the cost of transport into account, an area of 100,000 square kilometres would have been required. When the cost of transport is taken into account, the figure rises to 200,000 square kilometres... ''. Bairoch noted that this is roughly the size of Great Britain. The urban theorist Jane Jacobs suggests that city formation preceded the birth of agriculture, but this view is not widely accepted.
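Bairoch's arithmetic can be reproduced directly from the assumptions he states -- a density of 0.1 person per square kilometre, 10 % of food set aside as surplus, and city dwellers who do no farming. A minimal check of the figure before transport costs:

```python
# Reproduce Bairoch's estimate of the land needed to feed a city of 1,000.
density_per_km2 = 0.1    # pre-Neolithic Western Europe, persons per km^2
surplus_fraction = 0.10  # share of each farmer's output available for trade
city_population = 1_000

# Each city dweller must be fed by the surplus of 1 / 0.10 = 10 farmers.
farmers_needed = city_population / surplus_fraction
area_km2 = farmers_needed / density_per_km2
print(f"{area_km2:,.0f} km^2")
# 100,000 km^2, matching Bairoch's figure before transport costs are added.
```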
In his book City Economics, Brendan O'Flaherty asserts "Cities could persist -- as they have for thousands of years -- only if their advantages offset the disadvantages ''. O'Flaherty illustrates two similar attracting advantages known as increasing returns to scale and economies of scale, which are concepts usually associated with businesses. Their applications are seen in more basic economic systems as well. Increasing returns to scale occurs when "doubling all inputs more than doubles the output (and) an activity has economies of scale if doubling output less than doubles cost ''.
Similarly, "Are Cities Dying? '', a paper by Harvard economist Edward L. Glaeser, delves into similar reasons for city formation: reduced transport costs for goods, people and ideas. Discussing the benefits of proximity, Glaeser claims that if a city is doubled in size, workers get a ten percent increase in earnings. Glaeser furthers his argument by stating that bigger cities do not pay more for equal productivity than in a smaller city, so it is reasonable to assume that workers become more productive if they move to a city twice the size as they initially worked in. The workers do not benefit much from the ten percent wage increase, because it is recycled back into the higher cost of living in a larger city. They do gain other benefits from living in cities, though.
The first true towns are sometimes considered large settlements where the inhabitants were no longer simply farmers of the surrounding area, but began to take on specialized occupations, and where trade, food storage and power were centralized. In 1950 Gordon Childe attempted to define a historic city with 10 general metrics. These are:
This categorisation is descriptive, and it is used as a general touchstone when considering ancient cities, although not all have each of its characteristics.
The more complex human societies, called the first civilizations, emerged around 3000 BC in the river valleys of Mesopotamia, India, China, and Egypt. An increase in food production led to significant growth in the human population and the rise of cities. The peoples of Southwest Asia and Egypt laid the foundations of Western civilization; they developed cities and struggled with the problems of organised states as they moved from individual communities to larger territorial units and eventually to empires. Among these early civilizations, Egypt is exceptional for its apparent lack of big cities.
The growth of the population of ancient civilizations, the formation of ancient empires concentrating political power, and the growth in commerce and manufacturing led to ever greater capital cities and centres of commerce and industry, with Alexandria, Antioch and Seleucia of the Hellenistic civilization, Pataliputra (now Patna) in India, Chang'an (now Xi'an) in China, Carthage, ancient Rome, its eastern successor Constantinople (later Istanbul).
The roster of early urban traditions is notable for its diversity. Excavations at early urban sites show that some cities were sparsely populated political capitals, others were trade centers, and still other cities had a primarily religious focus. Some cities had large dense populations, whereas others carried out urban activities in the realms of politics or religion without having large associated populations. Theories that attempt to explain ancient urbanism by a single factor, such as economic benefit, fail to capture the range of variation documented by archaeologists.
Ancient Mesopotamia, the area of the Tigris and Euphrates within modern - day Iraq and Syria, was home to numerous cities by the third millennium BC. These cities formed the basis of the Sumerian and subsequent cultures. Cities such as Jericho, Uruk, Ur, Nineveh, and Babylon, made legendary by the Bible, have been located and excavated, while others such as Damascus and Jerusalem have been continuously populated.
The Phoenician trading empire, flourishing around the turn of the first millennium BC, encompassed numerous cities extending from Tyre, Sidon, and Byblos, across the Mediterranean to Carthage (in modern Tunisia) and Cádiz (in modern Spain). The name of Melqart, an important Phoenician deity, comes from M-L-K and Q-R-T, meaning "king '' and "city ''.
Beginning in the early first millennium, independent city - states in Greece began to flourish, evolving the notion of citizenship, becoming in the process the archetype of the free city, the polis. The agora, meaning "gathering place '' or "assembly '', was the center of athletic, artistic, spiritual and political life of the polis. These Greek city - states reached great levels of prosperity that resulted in an unprecedented cultural boom, that of classical Greece, expressed in architecture, drama, science, mathematics and philosophy, and nurtured in Athens under a democratic government. The Greek Hippodamus of Miletus (c. 407 BC) has been dubbed the "Father of City Planning '' for his design of Miletus; the Hippodamian, or grid plan, was the basis for subsequent Greek and Roman cities. In the 4th century BC, Alexander the Great commissioned Dinocrates of Rhodes to lay out his new city of Alexandria, the grandest example of idealized urban planning of the ancient Mediterranean world, where the city 's regularity was facilitated by its level site near a mouth of the Nile.
The rise of Rome again shifted the locus of political power, resulting in economic and demographic gain for the city of Rome itself, and a new political regime in the form of the Roman Empire. Rome founded many cities (coloniae), characteristically imposing a grid pattern made of north -- south cardines and east -- west decumani. The intersection of the cardo maximus and the decumanus maximus marked the origin of the city grid. Following these standard plans, Rome founded hundreds of cities and exerted substantial influence toward urbanizing the Mediterranean. In the process, Rome developed sanitation, public housing, public buildings and the forum. In the late Roman Empire political power was increasingly held by bishops of the Christian Church.
The Indus Valley Civilization and ancient China are two other areas with major indigenous urban traditions. Among the early Old World cities, Mohenjo - daro of the Indus Valley Civilization in present - day Pakistan, existing from about 2600 BC, was one of the largest, with a population of 50,000 or more and a sophisticated sanitation system.
China 's planned cities date to the turn of the second millennium BC. City - states emerging at this time used geomancy to locate and plan cities, orienting their walls to cardinal points. Symbolic cities were constructed as celestial microcosms, with the central point corresponding to the pole star representing harmony and connection between the earthly and other realms. In Chang'an the imperial palace lay to the north, facing south, absorbing the light of the sun, and royalty slept with their heads to the north and their feet to the south. Next came the Imperial City, and then the people 's city, divided into eastern and western halves.
Agriculture had been practiced in sub-Saharan Africa since the third millennium BC. Because of this, cities could develop as centers of non-agricultural activity, well before the influence of Arab urban culture. One of the oldest sites documented thus far, Jenné - Jeno in what is today Mali, has been dated to the third century BC. According to Roderick and Susan McIntosh, Jenné - Jeno did not fit into traditional Western conceptions of urbanity as it lacked monumental architecture and a distinctive elite social class, but it should indeed be considered a city based on a functional redefinition of urban development. In particular, Jenné - Jeno featured settlement mounds arranged according to a horizontal, rather than vertical, power hierarchy, and served as a center of specialized production and exhibited functional interdependence with the surrounding hinterland. Archaeological evidence from Jenné - Jeno, specifically the presence of non-West African glass beads dated from the third century BC to the fourth century AD, indicates that pre-Arabic trade contacts probably existed between Jenné - Jeno and North Africa. Additionally, other early urban centers in sub-Saharan Africa, dated to around 500 AD, include Awdaghust, Kumbi - Saleh, the ancient capital of Ghana, and Maranda, a center located on a trade route between Egypt and Gao.
In the ancient Americas, early urban traditions developed in the Andes and Mesoamerica. In the Andes, the first urban centers developed in the Norte Chico civilization (also Caral or Caral - Supe civilization), Chavin and Moche cultures, followed by major cities in the Huari, Chimu and Inca cultures. The Norte Chico civilization included as many as 30 major population centers in what is now the Norte Chico region of north - central coastal Peru. It is the oldest known civilization in the Americas, flourishing between the 30th century BC and the 18th century BC. Mesoamerica saw the rise of early urbanism in several cultural regions, including the Preclassic Maya, the Zapotec of Oaxaca, and Teotihuacan in central Mexico. Later cultures such as the Aztec drew on these earlier urban traditions.
Teotihuacan, flourishing from 200 BC to AD 750, was the largest American city of the pre-Columbian era, possibly reaching a population of 125,000 in AD 200. The city 's grid plan originated with the "Avenue of the Dead '', connecting the Temple of the Feathered Serpent and the Pyramid of the Moon. Beyond its ceremonial center the city featured religious buildings (23 temple complexes) and myriad workshops. Although its religious system was clearly expansive and significant, details of its political and economic functioning remain matters of speculation.
In the remnants of the Roman Empire, cities of late antiquity at first gained independence, but lost their population and their importance, starting in Roman Britain and Germania. The locus of power in the West shifted to Constantinople and to the ascendant Islamic civilization with its major cities Baghdad, Cairo, and Córdoba.
From the 9th through the end of the 12th century, Constantinople, capital of the Byzantine Empire, was the largest and wealthiest city in Europe, with a population approaching 1 million. Following the Byzantine -- Ottoman wars and other conflicts, the Ottoman Empire gained control over many cities in the Mediterranean area, including Constantinople in 1453.
During the European Middle Ages, a town was as much a political entity as a collection of houses. City residence brought freedom from customary rural obligations to lord and community: "Stadtluft macht frei '' ("City air makes you free '') was a saying in Germany. In continental Europe, cities with legislatures of their own were not unheard of; the laws governing towns were as a rule different from those governing the countryside, and the lord of a town was often a different person from the lord of the surrounding land. In the Holy Roman Empire, some cities had no other lord than the emperor. Some planned towns were created, in Britain by King Edward I to colonize Wales and in France, bastides, fortified cities designed on a regular plan.
By the thirteenth and fourteenth centuries some cities became powerful states, taking surrounding areas under their control or establishing extensive maritime empires. In Italy medieval communes developed into city - states including the Republic of Venice and the Republic of Genoa. These cities, with populations in the tens of thousands, amassed enormous wealth by means of extensive trade in eastern luxury goods such as spices and silk, as well as iron, timber, and slaves. Venice introduced the ghetto, a specially regulated neighborhood for Jews only. In Northern Europe, cities including Lübeck and Bruges formed the Hanseatic League for collective defense and commerce. Their power was later challenged and eclipsed by the Dutch commercial cities of Ghent, Ypres, and Amsterdam. (City rights were granted by nobility.) The city 's central function was commerce, enabled by waterways and ports; the cities themselves were heavily fortified with walls and sometimes moats.
Similar phenomena existed elsewhere, as in the case of Sakai, which enjoyed a considerable autonomy in late medieval Japan.
In the first millennium AD, an urban tradition developed in the Khmer region of Cambodia, where Angkor grew into one of the largest cities (in area) of the world. The closest rival to Angkor, the Mayan city of Tikal in Guatemala, was between 100 and 150 square kilometres (39 and 58 sq mi) in total size. Although its population remains a topic of research and debate, newly identified agricultural systems in the Angkor area may have supported up to one million people.
While the city - states, or poleis, of the Mediterranean and Baltic Sea languished from the 16th century, Western Europe 's larger capitals grew again as commercial hubs, especially following the emergence of an Atlantic trade. By the early 19th century, London had become the largest city in the world with a population of over a million, while Paris rivaled the well - developed regionally traditional capital cities of Baghdad, Beijing, Istanbul and Kyoto. Bastion forts arose in an attempt to make cities defensible against strengthening military firepower.
The Aztec city of Tenochtitlan, in present day Mexico, had an estimated population between 200,000 and 300,000 when the Spanish conquistador Hernán Cortés arrived in 1519. During the Spanish colonization of the Americas the old Roman city concept was extensively used. Cities were founded in the middle of the newly conquered territories, and were bound to several laws about administration, finances and urbanism.
Most towns remained small, so that in 1500 only some two dozen places in the world contained more than 100,000 inhabitants. As late as 1700, there were fewer than forty, a figure that rose to 300 in 1900.
The growth of modern industry from the late 18th century onward led to massive urbanization and the rise of new great cities, first in Europe and then in other regions, as new opportunities brought huge numbers of migrants from rural communities into urban areas. England led the way as London became the capital of a world empire and cities across the country grew in locations strategic for manufacturing. In the United States from 1860 to 1910, the introduction of railroads reduced transportation costs, and large manufacturing centers began to emerge, fueling migration from rural to city areas.
Industrialized cities became deadly places to live, due to health problems resulting from overcrowding, occupational hazards of industry, contaminated water and air, poor sanitation, and communicable diseases such as typhoid and cholera. Factories and slums emerged as regular features of the urban landscape.
The 19th century saw the rise of public transportation, such as horsebuses, followed by horse trams. At the end of the 19th century, electric urban rail transport (including trams and rapid transit) began to replace them, later completed with buses and other motor vehicles.
Street lights were uncommon until gas lighting became widespread in Europe in the early 19th century. Fuel gas was also used for heating and cooking. From the 1880s, electrification began, making electricity the main energy medium in cities until present day.
Modern water supply networks began to expand during the 19th century.
Growth of cities continued through the twentieth century and increased dramatically in the Third World (including India, China, and Africa), due to industrialization, active promotion of urbanization, and other factors.
Urban planning became widespread and professionalized. At the turn of the century, the "garden city '' model became the icon of a self - contained, comprehensively designed, residential and commercial settlement. Professional urban planners appeared in large numbers, not only to design cities, but to provide technical expertise to their administration.
Cities in the Great Depression of the 1930s, especially those with a base in heavy industry, were hard hit by unemployment. In the U.S., the urbanization rate increased from forty to eighty percent during 1900 -- 1990. Today the world 's population is slightly over half urban, and continues to urbanize, with roughly a million people moving into cities every 24 hours worldwide.
During the 20th century, car ownership increased steadily, in parallel with suburban sprawl, highways and other development for the car. Awareness of ecology in the mid-20th century created the environmental movement, which has addressed the need for sustainable development.
In the second half of the twentieth century, deindustrialization (or "economic restructuring '') in the West led to poverty, homelessness, and urban decay in formerly prosperous cities. America 's "Steel Belt '' became a "Rust Belt '' and cities such as Detroit, Michigan, and Gary, Indiana began to shrink, contrary to the global trend of massive urban expansion. Under the Great Leap Forward and subsequent five - year plans continuing today, the People 's Republic of China has undergone concomitant urbanization and industrialization, becoming the world 's leading manufacturer.
There is a debate about whether technology and instantaneous communications are making cities obsolete, or reinforcing the importance of big cities as centres of the knowledge economy. Knowledge - based development of cities, globalization of innovation networks, and broadband services are driving forces of a new city planning paradigm towards smart cities that use technology and communication to create more efficient agglomerations in terms of competitiveness, innovation, environment, energy, utilities, governance, and delivery of services to the citizen. Some companies are building brand new masterplanned cities from scratch on greenfield sites.
|
when did korea gain its independence from japan | Korean independence movement - wikipedia
The Korean independence movement was a military and diplomatic campaign to achieve the independence of Korea from Japan. After the Japanese annexation of Korea in 1910, local resistance in Korea culminated in the March 1st Movement of 1919, which was crushed; many Korean leaders fled to China. There, Korean independence activists built ties with the Chinese Nationalist Government, which supported the Provisional Government of the Republic of Korea (KPG) as a government in exile. At the same time, the Korean Liberation Army, which operated under the Chinese National Military Council and then the KPG, led attacks against Japan.
After the outbreak of the Pacific War, China became one of the Allies of World War II. In the Second Sino-Japanese War, China attempted to use this influence to assert Allied recognition of the KPG. However, the United States was skeptical of Korean unity and readiness for independence, preferring an international trusteeship-like solution for the peninsula. Although China secured the Allies' agreement on eventual Korean independence in the Cairo Declaration of 1943, continued disagreement and ambiguity about the postwar Korean government lasted until the Soviet--Japanese War created a de facto division of Korea into Soviet and American zones, which eventually led to the Korean War.
The date of the Surrender of Japan is an annual holiday called Gwangbokjeol ("Restoration of Light Day '') in South Korea, and Chogukhaebangŭi nal ("Fatherland Liberation Day '') in North Korea.
The last independent Korean monarchy, the Joseon dynasty, lasted over 500 years (1392 -- 1910), both as the Joseon Kingdom and later as the Empire of Korea. Its international status and policies were conducted primarily through careful diplomacy with the power en vogue in China (during this period of time dynastic control of China saw the end of the Yuan dynasty and the rise and fall of both the Ming dynasty and the Qing dynasty), though other interactions with other international entities were not absent. Through this maneuvering and a dedicated adherence to strict Neo-Confucianist foreign and domestic policies, Joseon Korea retained control over its internal affairs and relative international autonomy though technically a suzerain of the ruling Chinese dynasties for most of this period. These policies were effective in maintaining Korea 's relative independence and domestic autonomy in spite of a number of regional upheavals and a number of invasions (including the Japanese invasions of Korea from 1592 -- 98 as well as the First and Second Manchu invasions of Korea).
However, in the late 19th and early 20th centuries, with the rise of Western imperialism boosted by the Industrial Revolution and other major international trends, the weakening of China also made Korea vulnerable to foreign maneuvering and encroachment, both as a target in and of itself and as a stepping - stone to the "larger prize '' of China. This period (roughly from 1870 until annexation by Japan in 1910) was marked in Korea by major upheavals, many intrigues, the inability of Joseon Korea and the later Empire of Korea to right itself amidst all of the maneuvering around it by larger powers (including, but not limited to, Imperial Russia, Japan, China, and to a lesser extent France, Great Britain, and the United States), revolts / insurrections, and other indicators of a turbulent time. By the end of the First Sino - Japanese War in 1895 it was evident internationally that China could no longer protect its international interests, much less its own, against its opponents, and that its attempts to modernize its military and institutions were unsuccessful.
Among other things, the Treaty of Shimonoseki that ended this war stipulated that China would relinquish suzerainty and influence over Korea, recognize Korea's full independence and autonomy, and end the tribute system which had linked China and Korea for many centuries. In practical reality, this stipulation implied the handover of primary foreign influence in Korea from China to Japan, as Japanese forces had taken positions in the Korean Peninsula during the course of the war. This paved the way for Imperial Japan to tighten its influence on Korea without official Chinese intervention. In 1905, the Eulsa Treaty made the Empire of Korea (Korean imperial status had been established in 1897 to put King Gojong on equal legal footing with his neighboring sovereigns and to fully sever Korea's superficial ties of suzerainty to China) a protectorate of Japan; in 1907, the Japan--Korea Treaty of 1907 stipulated that Korea's policies would be enacted and enforced under the guidance of the Japanese resident general; and in 1910, through the Japan--Korea Annexation Treaty, Japan officially declared its annexation of Korea, a move for which Japan had been preparing for an extended period of time. All of these treaties were procured under duress; Emperor Sunjong of Korea refused to sign any of them and considered them illegal and non-binding, though he had no real power to oppose their enactment and enforcement.
Notably, both the 1905 treaty (and by extension the 1907 treaty) and the 1910 annexation treaty were declared "already null and void '' when the normalization of relations between the Republic of Korea and Japan was negotiated in 1965.
The Japanese rule that ensued was oppressive to a far - reaching degree, giving rise to many Korean resistance movements. By 1919 these became nationwide, marked by what became known as the March 1st Movement.
Japanese rule was oppressive but changed over time. Initially, there was very harsh repression in the decade following annexation, and Japan's rule was markedly different from that in its other colony, Formosa. Koreans call this period "amhukki", the dark period. Tens of thousands of Koreans were arrested for political reasons. The harshness of Japanese rule increased support for the Korean independence movement. Many Koreans left the country; some formed societies in Manchuria to agitate for Korean independence, while others went to Japan, where groups agitated clandestinely. There was a prominent group of Korean Communists in Japan, who were in danger for their political activities.
Partly due to Korean opposition to colonial policies, this was followed by a relaxation of some harsh policies. The Korean crown prince married the Japanese princess Nashimoto. The ban on Korean newspapers was lifted, allowing publication of Choson Ilbo and The Dong - a Ilbo. Korean government workers received the same wages as Japanese officials, though the Japanese officials received bonuses the Koreans did not. Whippings were eliminated for minor offenses but not for others. Laws interfering with burial, slaughtering of animals, peasant markets, or traditional customs were removed or changed.
After the Peace Preservation Law of 1925, some freedoms were restricted. Then, in the lead up to the invasion of China and World War II, the harshness of Japanese rule increased again.
Although the Empire of Japan had invaded and occupied northeast China from 1931, the Nationalist Government of China tried to avoid declaring war against Japan until the Empire directly attacked Beijing in 1937, sparking the Second Sino - Japanese War. After the United States declared war on Japan in 1941, China became an Ally of World War II, and tried to exercise its influence within the group to support Asian anticolonialist nationalism, which included the demand of the complete surrender of Japan and immediate independence of Korea afterwards.
China tried to promote the legitimacy of the Provisional Government of Korea (KPG), which was established by Koreans in China after the suppression of the March 1st Movement in Korea. The KPG was ideologically aligned with the Chinese government of the time, as independence leader Kim Gu had agreed to Chiang Kai - shek 's suggestion to adopt the Chinese Three Principles of the People program in exchange for financial aid. At the same time, China supported the leftist independence leader Kim Won - bong and convinced the two Kims to form the unified Korean Liberation Army (KLA). Under the terms in which the KLA was allowed to operate in China, it became an auxiliary of China 's National Revolutionary Army until 1945. China 's National Military Council had also decided that "complete independence '' for Korea was China 's fundamental Korean policy; otherwise, the government in Chongqing tried to unify the warring Korean factions.
Although Chiang and Korean leaders like Syngman Rhee tried to influence the US State Department to support Korean independence and recognize the KPG, the Far Eastern Division was skeptical. Its argument was that the Korean people "were emasculated politically '' after decades of Japanese rule, and showed too much disunity, preferring a condominium solution for Korea that involved the Soviets. China was adamantly opposed to Soviet influence in Korea after hearing about Soviet atrocities in Poland since its "liberation ''. By the Cairo Conference, the US and China came to agree on Korean independence "in due course '', with China still pressing for immediate recognition of the exile government and a tangible date for independence. After Soviet - American relations deteriorated, on August 10, 1945 the United States Department of War agreed that China should land troops in Pusan, Korea from which to prevent a Soviet takeover. However, this turnaround was too late to prevent the division of Korea, as the Red Army quickly occupied northern Korea that same month.
Although there were many separate movements against colonial rule, their main purpose was to free Korea from Japanese military and political rule. Koreans were concerned with alien domination and Korea's status as a colony. They desired to restore Korea's independent political sovereignty after Japan invaded the weakened and partially modernized Korean Empire, an annexation achieved through Japan's political maneuvering to secure international approval for the treaty annexing Korea.
During the independence movement, the rest of the world viewed Korea 's resistance movement as a racial anti-imperialist, anti-militarist rebellion, and an anti-Japanese resistance movement. Koreans, however, saw the movement as a step to free Korea from the Japanese military rule.
The South Korean government is (or at least was) criticized for not accepting Korean socialists who fought for Korean independence.
There was no main strategy or tactic that was prevalent throughout the resistance movement, but there were prominent stages where certain tactics or strategies were prominent.
From 1905 to 1910, most of the movement's activities were confined to the elite class and a few scholars. During this time, militaristic and violent attempts were made to resist the Japanese. Most of these attempts were disorganized, scattered, and leaderless, so as to avoid arrest and surveillance by Japan.
The period from 1910 to 1919 was a time of education under colonial rule. Many Korean textbooks on grammar and spelling were circulated in schools, starting a trend of intellectual resistance to Japanese rule. This period, along with Woodrow Wilson's progressive principles, created an aware, nationalist, and eager student population. After the March 1st Movement of 1919, strikes became prominent in the movement. Up to 1945, universities were used as a haven and a source of students who further supported the movement; this support system led to the improvement of school facilities. From 1911 to 1937, Korea, along with much of the rest of the world, was dealing with economic problems, including the Great Depression that followed World War I. There were many labor complaints that contributed to the grievances against Japan's colonial rule. During this period there were 159,061 disputes involving workers concerned with wages, and 1,018 disputes involving 68,686 tenant farmers. In 1926 the disputes began to increase at a fast pace, and labor movements became more prominent within the independence movement.
There were broadly three kinds of national liberation groups: (a) the religious groups which grew out of the Korean Confucianist and Christian communities; (b) the former military and the irregular army groups; and (c) business and intellectual expatriates who formed the theoretical and political framework abroad.
Koreans brought Catholicism to Korea towards the end of the 18th century and faced intense persecution. Methodist and Presbyterian missionaries followed in the 19th century, starting off a renaissance of more liberal thought on issues of equality and women's rights, which the strict Confucian tradition would not permit.
The early Korean Christian missionaries both led the Korean independence movement from 1890 through 1907 and later shaped the creation of a Korean liberation movement from 1907 to 1945. Korean Christians suffered martyrdom, crucifixion, burning to death, police interrogation, and massacre at the hands of the Japanese.
Amongst the major religious nationalist groups were:
Supporters of these groups included French, Czech, Chinese and Russian arms merchants, as well as Chinese nationalist movements.
Expatriate liberation groups were active in Shanghai, northeast China, parts of Russia, Hawaii, and San Francisco. Groups were even organised in areas without many expatriate Koreans, such as the one established in 1906 in Colorado by Park Hee Byung. The culmination of expatriate success was the Shanghai declaration of independence.
Sun Yat - sen was an early supporter of Korean struggles against Japanese invaders. By 1925, Korean expatriates began to cultivate two - pronged support in Shanghai: from Chiang Kai - Shek 's Kuomintang, and from early communist supporters, who later branched into the Communist Party of China.
Little real support came through, but what did come developed long-standing relationships that contributed to the division of Korea after 1949 and the polarized positions between South and North.
The constant infighting within the Yi family and the nobility, the confiscation of royal assets, the disbanding of the royal army by the Japanese, the execution of senior figures within Korea by Japan, and the comprehensive assassination of Korean royalty by Japanese mercenaries made it difficult for royal descendants and their family groups to find more than a partial leadership role within the liberation movement. A good many of the Righteous Army commanders were linked to the family, but these generals and their Righteous Army groups were largely eliminated by 1918; cadet members of the families contributed towards establishing both republics post-1945.
|
average population of a county in the us | County statistics of the United States - wikipedia
In 48 of the 50 states of the United States, the county is used for the level of local government immediately below the state itself. Louisiana uses parishes, and Alaska uses boroughs. In several states in New England, some or all counties within states have no governments of their own; the counties continue to exist as legal entities, however, and are used by states for some administrative functions and by the United States Census Bureau for statistical analysis. There are 3,142 counties and county-equivalent administrative units in total, including the District of Columbia.
There are 41 independent cities in the United States. In Virginia, any municipality that is incorporated as a city legally becomes independent of any county. Where indicated, the statistics below do not include Virginia 's 38 independent cities.
In Alaska, most of the land area of the state has no county - level government. Those parts of the state are divided by the United States Census Bureau into census areas, which are not the same as boroughs. The state 's largest statistical division by area is the Yukon -- Koyukuk Census Area, which is larger than any of the state 's boroughs. Although Anchorage is called a municipality, it is considered a consolidated city and borough.
Lists of counties and county equivalents by number per political division:
These rankings include county equivalents.
The following tables exclude county equivalents. The largest counties and county equivalents are organized boroughs and the census areas of Alaska, with the top two being Yukon--Koyukuk Census Area (145,504.79 sq mi or 376,855.7 km²) and North Slope Borough (88,695.41 sq mi or 229,720.1 km²). The smallest counties and county equivalents are the independent cities of Virginia, with the extreme being Falls Church (2.00 sq mi or 5.2 km²).
If independent cities are included, Falls Church becomes the smallest county in the state, and in fact the smallest county-level political subdivision in the United States, at 2.0 square miles (5.2 km²).
This list excludes Alaskan Census Areas, but includes other county equivalents. The North Slope Borough is the largest independently incorporated county equivalent. The Unorganized Borough is substantially larger, but is an extension of the State of Alaska government and not independently incorporated.
Also note that the smallest land area with county - level governance in the U.S. is Falls Church, Virginia, but it is an independent city and not a county or part of one. Kalawao County, Hawaii is the smallest true county by land area.
Data presented below is based on U.S. Census Bureau data from 2010. Calculations are made by dividing the population by the land area. All county equivalents are included.
This list is generated by dividing the population by the land area. All county equivalents are included. The list is dominated by just a few states: Alaska, Montana, and Texas together comprise about two-thirds of the entries. The Unorganized Borough is not included here as a unit, but its census areas (non-governmental entities) are. If the census areas were removed from the list, the Unorganized Borough would rank fourteenth with a density of 0.38 per square mile (0.15 / km²).
Data presented below is based on U.S. Census Bureau data from 2010. Calculations are made by dividing the population by the land area. All county equivalents are included.
Excluding the census areas of Alaska, Lake and Peninsula Borough is the least densely populated county equivalent, with 0.069 / sq mi (0.027 / km²).
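The density figures in these tables come from a single arithmetic step: population divided by land area. A minimal sketch of that calculation follows; the county names and numbers are illustrative placeholders, not official Census Bureau data.

```python
# Sketch of the density calculation used in the tables above:
# density = population / land area.
# All names and figures below are hypothetical examples.

SQ_MI_TO_KM2 = 2.589988  # 1 square mile in square kilometres

def population_density(population, land_area_sq_mi):
    """Persons per square mile."""
    return population / land_area_sq_mi

# Hypothetical county-equivalent records: (name, population, land area in sq mi).
counties = [
    ("Example Borough", 7_000, 100_000.0),
    ("Sample County", 1_000_000, 500.0),
    ("Placeholder Parish", 50_000, 1_000.0),
]

# Rank from most to least densely populated, as the tables do.
ranked = sorted(counties, key=lambda c: population_density(c[1], c[2]), reverse=True)
for name, pop, area in ranked:
    per_sq_mi = population_density(pop, area)
    print(f"{name}: {per_sq_mi:,.2f} / sq mi ({per_sq_mi / SQ_MI_TO_KM2:,.2f} / km²)")
```

The same division also explains why Alaska's enormous but thinly peopled boroughs and census areas dominate the low end of the list: a small numerator over a very large denominator.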
^ A: The Unorganized Borough, Alaska, formed by the Borough Act of 1961, is a legal entity run by the Alaska state government as an extension of state government; it and the independently incorporated Unified, Home Rule, First Class, and Second Class boroughs roughly correspond to parishes in Louisiana and to counties in the other 48 states.
|
where is the journey supposed to end in the canterbury tales | The Canterbury Tales - wikipedia
The Canterbury Tales (Middle English: Tales of Caunterbury) is a collection of 24 stories that runs to over 17,000 lines written in Middle English by Geoffrey Chaucer between 1387 and 1400. In 1386, Chaucer became Controller of Customs and Justice of the Peace and, in 1389, Clerk of the King's Works. It was during these years that Chaucer began working on his most famous text, The Canterbury Tales. The tales (mostly written in verse, although some are in prose) are presented as part of a story-telling contest by a group of pilgrims as they travel together from London to Canterbury to visit the shrine of Saint Thomas Becket at Canterbury Cathedral. The prize for this contest is a free meal at the Tabard Inn at Southwark on their return.
After a long list of works written earlier in his career, including Troilus and Criseyde, House of Fame, and Parliament of Fowls, The Canterbury Tales is near-unanimously seen as Chaucer's magnum opus. He uses the tales and descriptions of its characters to paint an ironic and critical portrait of English society at the time, and particularly of the Church. Chaucer's use of such a wide range of classes and types of people was without precedent in English. Although the characters are fictional, they offer a variety of insights into the customs and practices of the time, insights that have in turn prompted much discussion and disagreement about 14th-century society. For example, although various social classes are represented in these stories and all of the pilgrims are on a spiritual quest, it is apparent that they are more concerned with worldly things than spiritual ones. Structurally, the collection resembles Giovanni Boccaccio's The Decameron, which Chaucer may have read during his first diplomatic mission to Italy in 1372.
It has been suggested that the greatest contribution of The Canterbury Tales to English literature was the popularisation of the English vernacular in mainstream literature, as opposed to French, Italian or Latin. English had, however, been used as a literary language centuries before Chaucer 's time, and several of Chaucer 's contemporaries -- John Gower, William Langland, the Pearl Poet, and Julian of Norwich -- also wrote major literary works in English. It is unclear to what extent Chaucer was seminal in this evolution of literary preference.
While Chaucer clearly states the addressees of many of his poems, the intended audience of The Canterbury Tales is more difficult to determine. Chaucer was a courtier, leading some to believe that he was mainly a court poet who wrote exclusively for nobility.
The Canterbury Tales is generally thought to have been incomplete at the end of Chaucer 's life. In the General Prologue, some 30 pilgrims are introduced. According to the Prologue, Chaucer 's intention was to write four stories from the perspective of each pilgrim, two each on the way to and from their ultimate destination, St. Thomas Becket 's shrine (making for a total of about 120 stories). Although perhaps incomplete, The Canterbury Tales is revered as one of the most important works in English literature. It is also open to a wide range of interpretations.
The question of whether The Canterbury Tales is a finished work has not been answered to date. There are 84 manuscripts and four incunable editions of the work, dating from the late medieval and early Renaissance periods, more than for any other vernacular literary text with the exception of The Prick of Conscience. This is taken as evidence of the Tales ' popularity during the century after Chaucer 's death. Fifty - five of these manuscripts are thought to have been originally complete, while 28 are so fragmentary that it is difficult to ascertain whether they were copied individually or as part of a set. The Tales vary in both minor and major ways from manuscript to manuscript; many of the minor variations are due to copyists ' errors, while it is suggested that in other cases Chaucer both added to his work and revised it as it was being copied and possibly as it was being distributed. Determining the text of the work is complicated by the question of the narrator 's voice which Chaucer made part of his literary structure.
Even the oldest surviving manuscripts of the Tales are not Chaucer's originals. The very oldest is probably MS Peniarth 392 D (called "Hengwrt"), written by a scribe shortly after Chaucer's death. The most beautiful, on the other hand, is the Ellesmere Manuscript, whose ordering many editors have followed even down to the present day. The first version of The Canterbury Tales to be published in print was William Caxton's 1476 edition. Only 10 copies of this edition are known to exist, including one held by the British Library and one held by the Folger Shakespeare Library.
In 2004, Linne Mooney claimed that she was able to identify the scrivener who worked for Chaucer as an Adam Pinkhurst. Mooney, then a professor at the University of Maine and a visiting fellow at Corpus Christi College, Cambridge, said she could match Pinkhurst 's signature, on an oath he signed, to his handwriting on a copy of The Canterbury Tales that might have been transcribed from Chaucer 's working copy.
In the absence of consensus as to whether or not a complete version of the Tales exists, there is also no general agreement regarding the order in which Chaucer intended the stories to be placed.
Textual and manuscript clues have been adduced to support the two most popular modern methods of ordering the tales. Some scholarly editions divide the Tales into ten "Fragments ''. The tales that make up a Fragment are closely related and contain internal indications of their order of presentation, usually with one character speaking to and then stepping aside for another character. However, between Fragments, the connection is less obvious. Consequently, there are several possible orders; the one most frequently seen in modern editions follows the numbering of the Fragments (ultimately based on the Ellesmere order). Victorians frequently used the nine "Groups '', which was the order used by Walter William Skeat whose edition Chaucer: Complete Works was used by Oxford University Press for most of the twentieth century, but this order is now seldom followed.
An alternative ordering (seen in an early manuscript containing The Canterbury Tales, the early - fifteenth century Harley MS. 7334) places Fragment VIII before VI. Fragments I and II almost always follow each other, just as VI and VII, IX and X do in the oldest manuscripts. Fragments IV and V, by contrast, vary in location from manuscript to manuscript.
Chaucer wrote in late Middle English, which has clear differences from Modern English. From philological research, we know certain facts about the pronunciation of English during the time of Chaucer. Chaucer pronounced - e at the end of words, so that care was (ˈkaːrə), not / ˈkɛər / as in Modern English. Other silent letters were also pronounced, so that the word knight was (kniçt), with both the k and the gh pronounced, not / ˈnaɪt /. In some cases, vowel letters in Middle English were pronounced very differently from Modern English, because the Great Vowel Shift had not yet happened. For instance, the long e in wepyng "weeping '' was pronounced as (eː), as in modern German or Italian, not as / iː /. Below is an IPA transcription of the opening lines of The Merchant 's Prologue:
Although no manuscript exists in Chaucer 's own hand, two were copied around the time of his death by Adam Pinkhurst, a scribe with whom he seems to have worked closely before, giving a high degree of confidence that Chaucer himself wrote the Tales. Because the final - e sound was lost soon after Chaucer 's time, scribes did not accurately copy it, and this gave scholars the impression that Chaucer himself was inconsistent in using it. It has now been established, however, that - e was an important part of Chaucer 's grammar, and helped to distinguish singular adjectives from plural and subjunctive verbs from indicative.
No other work prior to Chaucer 's is known to have set a collection of tales within the framework of pilgrims on a pilgrimage. It is obvious, however, that Chaucer borrowed portions, sometimes very large portions, of his stories from earlier stories, and that his work was influenced by the general state of the literary world in which he lived. Storytelling was the main entertainment in England at the time, and storytelling contests had been around for hundreds of years. In 14th - century England the English Pui was a group with an appointed leader who would judge the songs of the group. The winner received a crown and, as with the winner of The Canterbury Tales, a free dinner. It was common for pilgrims on a pilgrimage to have a chosen "master of ceremonies '' to guide them and organise the journey. Harold Bloom suggests that the structure is mostly original, but inspired by the "pilgrim '' figures of Dante and Virgil in The Divine Comedy. New research suggests that the General Prologue, in which the innkeeper and host Harry Bailey introduces each pilgrim, is a pastiche of the historical Harry Bailey 's surviving 1381 poll - tax account of Southwark 's inhabitants.
The Decameron by Giovanni Boccaccio contains more parallels to The Canterbury Tales than any other work. Like the Tales, it features a number of narrators who tell stories along a journey they have undertaken (to flee from the Black Death). It ends with an apology by Boccaccio, much like Chaucer's Retraction to the Tales. A quarter of the tales in The Canterbury Tales parallel a tale in the Decameron, although most of them have closer parallels in other stories. Some scholars thus find it unlikely that Chaucer had a copy of the work on hand, surmising instead that he must have merely read the Decameron at some point, while a newer study claims he had a copy of the Decameron and used it extensively as he began work on his own collection. Each of the tales has its own set of sources suggested by scholars, but a few sources are used frequently across several tales. They include the poetry of Ovid, the Bible in one of the many vulgate versions in which it was available at the time (the exact one is difficult to determine), and the works of Petrarch and Dante. Chaucer was the first author to use the work of these last two, both Italians. Boethius' Consolation of Philosophy appears in several tales, as do the works of John Gower, a known friend of Chaucer. A full list is impossible to outline in so little space, but Chaucer also seems to have borrowed from numerous religious encyclopaedias and liturgical writings, such as John Bromyard's Summa praedicantium, a preacher's handbook, and Jerome's Adversus Jovinianum. Many scholars say there is a good possibility Chaucer met Petrarch or Boccaccio.
The Canterbury Tales is a collection of stories built around a frame narrative or frame tale, a common and already long established genre of its period. Chaucer 's Tales differs from most other story "collections '' in this genre chiefly in its intense variation. Most story collections focused on a theme, usually a religious one. Even in the Decameron, storytellers are encouraged to stick to the theme decided on for the day. The idea of a pilgrimage to get such a diverse collection of people together for literary purposes was also unprecedented, though "the association of pilgrims and storytelling was a familiar one ''. Introducing a competition among the tales encourages the reader to compare the tales in all their variety, and allows Chaucer to showcase the breadth of his skill in different genres and literary forms.
While the structure of the Tales is largely linear, with one story following another, it is also much more than that. In the General Prologue, Chaucer describes not the tales to be told, but the people who will tell them, making it clear that structure will depend on the characters rather than a general theme or moral. This idea is reinforced when the Miller interrupts to tell his tale after the Knight has finished his. Having the Knight go first gives one the idea that all will tell their stories by class, with the Monk following the Knight. However, the Miller 's interruption makes it clear that this structure will be abandoned in favour of a free and open exchange of stories among all classes present. General themes and points of view arise as the characters tell their tales, which are responded to by other characters in their own tales, sometimes after a long lapse in which the theme has not been addressed.
Lastly, Chaucer does not pay much attention to the progress of the trip, to the time passing as the pilgrims travel, or to specific locations along the way to Canterbury. His writing of the story seems focused primarily on the stories being told, and not on the pilgrimage itself.
The variety of Chaucer's tales shows the breadth of his skill and his familiarity with many literary forms, linguistic styles, and rhetorical devices. Medieval schools of rhetoric at the time encouraged such diversity, dividing literature (as Virgil suggests) into high, middle, and low styles as measured by the density of rhetorical forms and vocabulary. Another popular method of division came from St. Augustine, who focused more on audience response and less on subject matter (a Virgilian concern). Augustine divided literature into "majestic persuades", "temperate pleases", and "subdued teaches". Writers were encouraged to write in a way that kept in mind the speaker, subject, audience, purpose, manner, and occasion. Chaucer moves freely between all of these styles, showing favouritism to none. He considers not only the readers of his work as an audience, but the other pilgrims within the story as well, creating a multi-layered rhetorical puzzle of ambiguities. Chaucer's work thus far surpasses what any single medieval theory can uncover.
With this, Chaucer avoids targeting any specific audience or social class of readers, focusing instead on the characters of the story and writing their tales with a skill proportional to their social status and learning. However, even the lowest characters, such as the Miller, show surprising rhetorical ability, although their subject matter is more lowbrow. Vocabulary also plays an important part, as those of the higher classes refer to a woman as a "lady '', while the lower classes use the word "wenche '', with no exceptions. At times the same word will mean entirely different things between classes. The word "pitee '', for example, is a noble concept to the upper classes, while in the Merchant 's Tale it refers to sexual intercourse. Again, however, tales such as the Nun 's Priest 's Tale show surprising skill with words among the lower classes of the group, while the Knight 's Tale is at times extremely simple.
Chaucer uses the same meter throughout almost all of his tales, with the exception of Sir Thopas and his prose tales. It is a decasyllable line, probably borrowed from French and Italian forms, with riding rhyme and, occasionally, a caesura in the middle of a line. His meter would later develop into the heroic meter of the 15th and 16th centuries and is an ancestor of iambic pentameter. He avoids allowing couplets to become too prominent in the poem, and four of the tales (the Man of Law 's, Clerk 's, Prioress ', and Second Nun 's) use rhyme royal.
The Canterbury Tales was written during a turbulent time in English history. The Catholic Church was in the midst of the Western Schism and, though it was still the only Christian authority in Europe, was the subject of heavy controversy. Lollardy, an early English religious movement led by John Wycliffe, is mentioned in the Tales, which also mention a specific incident involving pardoners (sellers of indulgences, which were believed to relieve the temporal punishment due for sins already forgiven in the Sacrament of Confession) who nefariously claimed to be collecting for St. Mary Rouncesval hospital in England. The Canterbury Tales is among the first English literary works to mention paper, a relatively new invention that allowed a dissemination of the written word never before seen in England. Political clashes, such as the 1381 Peasants' Revolt and the clashes that ended in the deposition of King Richard II, further reveal the complex turmoil surrounding Chaucer at the time of the Tales' writing. Many of his close friends were executed, and he himself moved to Kent to get away from events in London.
While some readers look to interpret the characters of The Canterbury Tales as historical figures, other readers choose to interpret its significance in less literal terms. After analysis of Chaucer's diction and historical context, his work appears to develop a critique of the society of his lifetime. Within a number of his descriptions, his comments can appear complimentary in nature, but through clever language the statements are ultimately critical of the pilgrims' actions. It is unclear whether Chaucer intended the reader to link his characters with actual persons; instead, it appears that he created fictional characters as general representations of people in such fields of work. With an understanding of medieval society, one can detect subtle satire at work.
The Tales reflect diverse views of the Church in Chaucer's England. After the Black Death, many Europeans began to question the authority of the established Church. Some turned to Lollardy, while others chose less extreme paths, starting new monastic orders or smaller movements exposing church corruption in the behaviour of the clergy, false church relics, or abuse of indulgences. Several characters in the Tales are religious figures, and the very setting of the pilgrimage to Canterbury is religious (although the prologue comments ironically on its merely seasonal attractions), making religion a significant theme of the work.
Two characters, the Pardoner and the Summoner, whose roles exercise the Church's secular power, are both portrayed as deeply corrupt, greedy, and abusive. Pardoners in Chaucer's day sold Church "indulgences" for the forgiveness of sins and were notorious for abusing their office for their own gain. Chaucer's Pardoner openly admits the corruption of his practice while hawking his wares. Summoners were Church officers who brought sinners to the Church court for possible excommunication and other penalties. Corrupt summoners would write false citations and frighten people into bribing them to protect their interests. Chaucer's Summoner is portrayed as guilty of the very kinds of sins for which he threatens to bring others to court, and is hinted to have a corrupt relationship with the Pardoner. In The Friar's Tale, one of the characters is a summoner who is shown to be working on the side of the devil, not God.
Churchmen of various kinds are represented by the Monk, the Prioress, the Nun's Priest, and the Second Nun. Monastic orders, which originated from a desire to follow an ascetic lifestyle separated from the world, had by Chaucer's time become increasingly entangled in worldly matters. Monasteries frequently controlled huge tracts of land on which they made significant sums of money, while peasants worked in their employ. The Second Nun is an example of what a nun was expected to be: her tale is about a woman whose chaste example brings people into the church. The Monk and the Prioress, on the other hand, while not as corrupt as the Summoner or Pardoner, fall far short of the ideal for their orders. Both are expensively dressed, show signs of lives of luxury and flirtatiousness, and lack spiritual depth. The Prioress's Tale is an account of Jews murdering a deeply pious and innocent Christian boy, a blood libel against Jews that became a part of English literary tradition. The story did not originate in the works of Chaucer and was well known in the 14th century.
Pilgrimage was a very prominent feature of medieval society. The ultimate pilgrimage destination was Jerusalem, but within England Canterbury was a popular destination. Pilgrims would journey to cathedrals that preserved relics of saints, believing that such relics held miraculous powers. Saint Thomas Becket, Archbishop of Canterbury, had been murdered in Canterbury Cathedral by knights of Henry II during a disagreement between Church and Crown. Miracle stories connected to his remains sprang up soon after his death, and the cathedral became a popular pilgrimage destination. The pilgrimage in the work ties all of the stories together and may be considered a representation of Christians ' striving for heaven, despite weaknesses, disagreement, and diversity of opinion.
The upper class or nobility, represented chiefly by the Knight and his Squire, was in Chaucer's time steeped in a culture of chivalry and courtliness. Nobles were expected to be powerful warriors who could be ruthless on the battlefield yet mannerly in the King's Court and Christian in their actions. Knights were expected to form a strong social bond with the men who fought alongside them, but an even stronger bond with a woman whom they idealised to strengthen their fighting ability. Though the aim of chivalry was noble action, its conflicting values often degenerated into violence. Church leaders frequently tried to place restrictions on jousts and tournaments, which at times ended in the death of the loser. The Knight's Tale shows how the brotherly love of two fellow knights turns into a deadly feud at the sight of a woman whom both idealise. To win her, both are willing to fight to the death. Chivalry was in Chaucer's day on the decline, and it is possible that The Knight's Tale was intended to show its flaws, although this is disputed. Chaucer himself had fought in the Hundred Years' War under Edward III, who heavily emphasised chivalry during his reign. Two tales, Sir Thopas and The Tale of Melibee, are told by Chaucer himself, who is travelling with the pilgrims in his own story. Both tales seem to focus on the ill effects of chivalry -- the first making fun of chivalric rules and the second warning against violence.
The Tales constantly reflect the conflict between classes. For example, they depict the division of society into three estates: the characters are all divided into three distinct classes, "those who pray" (the clergy), "those who fight" (the nobility), and "those who work" (the commoners and peasantry). Most of the tales are interlinked by common themes, and some "quit" (reply to or retaliate against) other tales. Convention is followed when the Knight begins the game with a tale, as he represents the highest social class in the group. But when he is followed by the Miller, who represents a lower class, it sets the stage for the Tales to reflect both a respect for and a disregard for upper-class rules. Helen Cooper, as well as Mikhail Bakhtin and Derek Brewer, call this opposition "the ordered and the grotesque, Lent and Carnival, officially approved culture and its riotous, and high-spirited underside." Several works of the time contained the same opposition.
Chaucer 's characters each express different -- sometimes vastly different -- views of reality, creating an atmosphere of testing, empathy, and relativism. As Helen Cooper says, "Different genres give different readings of the world: the fabliau scarcely notices the operations of God, the saint 's life focuses on those at the expense of physical reality, tracts and sermons insist on prudential or orthodox morality, romances privilege human emotion. '' The sheer number of varying persons and stories renders the Tales as a set unable to arrive at any definite truth or reality.
The concept of liminality figures prominently within The Canterbury Tales. A liminal space, which can be geographical as well as metaphorical or spiritual, is the transitional or transformational space between a "real" (secure, known, limited) world and an unknown or imaginary space of both risk and possibility. The notion of a pilgrimage is itself a liminal experience, because it centres on travel between destinations and because pilgrims undertake it hoping to become more holy in the process. Thus, the structure of The Canterbury Tales itself is liminal; it not only covers the distance between London and Canterbury, but the majority of the tales refer to places entirely outside the geography of the pilgrimage. Jean Jost summarises the function of liminality in The Canterbury Tales:
"Both appropriately and ironically in this raucous and subversive liminal space, a ragtag assembly gather together and tell their equally unconventional tales. In this unruly place, the rules of tale telling are established, themselves to be both disordered and broken; here the tales of game and earnest, solas and sentence, will be set and interrupted. Here the sacred and profane adventure begins, but does not end. Here, the condition of peril is as prominent as that of protection. The act of pilgrimaging itself consists of moving from one urban space, through liminal rural space, to the next urban space with an ever fluctuating series of events and narratives punctuating those spaces. The goal of pilgrimage may well be a religious or spiritual space at its conclusion, and reflect a psychological progression of the spirit, in yet another kind of emotional space. ''
Liminality is also evident in the individual tales. An obvious instance is the Friar's Tale, in which the yeoman devil is a liminal figure because of his transitory nature and function; it is his purpose to carry souls from their current existence to hell, an entirely different one. The Franklin's Tale is a Breton lai, a genre that takes the tale into a liminal space by invoking not only the interaction of the supernatural and the mortal, but also the relation between the present and the imagined past.
It is sometimes argued that the greatest contribution this work made to English literature was popularising the literary use of the vernacular, English, rather than French or Latin. English had, however, been used as a literary language for centuries before Chaucer's life, and several of Chaucer's contemporaries -- John Gower, William Langland, and the Pearl Poet -- also wrote major literary works in English. It is unclear to what extent Chaucer was responsible for starting a trend rather than simply being part of it. Although Chaucer had a powerful influence in poetic and artistic terms, which can be seen in the great number of forgeries and mistaken attributions (such as The Floure and the Leafe, which was translated by John Dryden), modern English spelling and orthography owe much more to the innovations made by the Court of Chancery in the decades during and after his lifetime.
While Chaucer clearly states the addressees of many of his poems (the Book of the Duchess is believed to have been written for John of Gaunt on the occasion of his wife's death in 1368), the intended audience of The Canterbury Tales is more difficult to determine. Chaucer was a courtier, leading some to believe that he was mainly a court poet who wrote exclusively for the nobility. He is referred to as a noble translator and poet by Eustache Deschamps and by his contemporary John Gower. It has been suggested that the poem was intended to be read aloud, which is probable, as this was a common activity at the time. However, it also seems to have been intended for private reading, since Chaucer frequently refers to himself as the writer, rather than the speaker, of the work. Determining the intended audience directly from the text is even more difficult, since the audience is part of the story. This makes it difficult to tell when Chaucer is writing to the fictional pilgrim audience or to the actual reader.
Chaucer's works may have been distributed in some form during his lifetime, in part or in whole. Scholars speculate that manuscripts were circulated among his friends but likely remained unknown to most people until after his death. However, the speed with which copyists strove to write complete versions of his tales in manuscript form shows that Chaucer was a famous and respected poet in his own day. The Hengwrt and Ellesmere manuscripts are examples of the care taken to distribute the work. More manuscript copies of the poem exist than for any other poem of its day except The Prick of Conscience, causing some scholars to give it the medieval equivalent of bestseller status. Even the most elegant of the illustrated manuscripts, however, is not nearly as highly decorated as the work of authors of more respectable works such as John Lydgate's religious and historical literature.
John Lydgate and Thomas Occleve were among the first critics of Chaucer 's Tales, praising the poet as the greatest English poet of all time and the first to show what the language was truly capable of poetically. This sentiment was universally agreed upon by later critics into the mid-15th century. Glosses included in The Canterbury Tales manuscripts of the time praised him highly for his skill with "sentence '' and rhetoric, the two pillars by which medieval critics judged poetry. The most respected of the tales was at this time the Knight 's, as it was full of both.
The incompleteness of the Tales led several medieval authors to write additions and supplements to the tales to make them more complete. Some of the oldest existing manuscripts of the tales include new or modified tales, showing that even early on, such additions were being created. These emendations included various expansions of the Cook 's Tale, which Chaucer never finished, The Plowman 's Tale, The Tale of Gamelyn, the Siege of Thebes, and the Tale of Beryn.
The Tale of Beryn, written by an anonymous author in the 15th century, is preceded by a lengthy prologue in which the pilgrims arrive at Canterbury and their activities there are described. While the rest of the pilgrims disperse throughout the town, the Pardoner seeks the affections of Kate the barmaid, but faces problems dealing with the man in her life and the innkeeper Harry Bailey. As the pilgrims turn back home, the Merchant restarts the storytelling with the Tale of Beryn. In this tale, a young man named Beryn travels from Rome to Egypt to seek his fortune, only to be cheated by other businessmen there. He is then aided by a local man in getting his revenge. The tale comes from the French tale Bérinus and exists in a single early manuscript of the tales, although it was printed along with the tales in a 1721 edition by John Urry.
John Lydgate wrote The Siege of Thebes in about 1420. Like the Tale of Beryn, it is preceded by a prologue in which the pilgrims arrive in Canterbury. Lydgate places himself among the pilgrims as one of them and describes how he was a part of Chaucer 's trip and heard the stories. He characterises himself as a monk and tells a long story about the history of Thebes before the events of the Knight 's Tale. John Lydgate 's tale was popular early on and exists in old manuscripts both on its own and as part of the Tales. It was first printed as early as 1561 by John Stow, and several editions for centuries after followed suit.
There are two versions of The Plowman's Tale, both of which are influenced by the poem Piers Plowman, a work written during Chaucer's lifetime. Chaucer describes a Plowman in the General Prologue of his tales but never gives him his own tale. One version, written by Thomas Occleve, describes the miracle of the Virgin and the Sleeveless Garment. The other features a pelican and a griffin debating church corruption, with the pelican taking a position of protest akin to John Wycliffe's ideas.
The Tale of Gamelyn was included in an early manuscript version of the tales, Harley 7334, which is notorious for being one of the lower - quality early manuscripts in terms of editor error and alteration. It is now widely rejected by scholars as an authentic Chaucerian tale, although some scholars think he may have intended to rewrite the story as a tale for the Yeoman. Dates for its authorship vary from 1340 to 1370.
Many literary works (both fiction and non-fiction alike) have used a similar frame narrative to The Canterbury Tales as an homage. Science - fiction writer Dan Simmons wrote his Hugo Award winning novel Hyperion based on an extra-planetary group of pilgrims. Evolutionary biologist Richard Dawkins used The Canterbury Tales as a structure for his 2004 non-fiction book about evolution titled The Ancestor 's Tale: A Pilgrimage to the Dawn of Evolution. His animal pilgrims are on their way to find the common ancestor, each telling a tale about evolution.
Henry Dudeney 's book The Canterbury Puzzles contains a part reputedly lost from what modern readers know as Chaucer 's tales.
Historical - mystery novelist P.C. Doherty wrote a series of novels based on The Canterbury Tales, making use of both the story frame and Chaucer 's characters.
Canadian author Angie Abdou translates The Canterbury Tales to a cross section of people, all snow - sports enthusiasts but from different social backgrounds, converging on a remote back - country ski cabin in British Columbia in the 2011 novel The Canterbury Trail.
The Two Noble Kinsmen, by William Shakespeare and John Fletcher, a retelling of "The Knight 's Tale '', was first performed in 1613 or 1614 and published in 1634. In 1961, Erik Chisholm completed his opera, The Canterbury Tales. The opera is in three acts: The Wyf of Bath 's Tale, The Pardoner 's Tale and The Nun 's Priest 's Tale. Nevill Coghill 's modern English version formed the basis of a musical version that was first staged in 1964.
A Canterbury Tale, a 1944 film jointly written and directed by Michael Powell and Emeric Pressburger, is loosely based on the narrative frame of Chaucer 's tales. The movie opens with a group of medieval pilgrims journeying through the Kentish countryside as a narrator speaks the opening lines of the General Prologue. The scene then makes a now - famous transition to the time of World War II. From that point on, the film follows a group of strangers, each with his or her own story and in need of some kind of redemption, who are making their way to Canterbury together. The film 's main story takes place in an imaginary town in Kent and ends with the main characters arriving at Canterbury Cathedral, bells pealing and Chaucer 's words again resounding. A Canterbury Tale is recognised as one of the Powell - Pressburger team 's most poetic and artful films. It was produced as wartime propaganda, using Chaucer 's poetry, referring to the famous pilgrimage, and offering photography of Kent to remind the public of what made Britain worth fighting for. In one scene a local historian lectures an audience of British soldiers about the pilgrims of Chaucer 's time and the vibrant history of England.
Pier Paolo Pasolini 's 1972 film The Canterbury Tales features several of the tales, some of which keep close to the original tale and some of which are embellished. The Cook 's Tale, for instance, which is incomplete in the original version, is expanded into a full story, and the Friar 's Tale extends the scene in which the Summoner is dragged down to hell. The film includes these two tales as well as the Miller 's Tale, the Summoner 's Tale, the Wife of Bath 's Tale, and the Merchant 's Tale.
On April 26, 1986, American radio personality Garrison Keillor opened "The News from Lake Wobegon '' portion of the first live TV broadcast of his A Prairie Home Companion radio show with a reading of the original Middle English text of the General Prologue. He commented, "Although those words were written more than 600 years ago, they still describe spring. ''
Television adaptations include Alan Plater 's 1975 re-telling of the stories in a series of plays for BBC2: Trinity Tales. In 2003, BBC again featured modern re-tellings of selected tales.
The Knight
The Squire
The Reeve
The Miller
The Cook
The Wife of Bath
The Franklin
The Shipman
The Manciple
The Merchant
The Clerk of Oxford
The Sergeant of Law
The Physician
The Parson
The Monk
The Prioress
The Second Nun
The Nun 's Priest
The Friar
The Summoner
The Pardoner
The Canon Yeoman
Geoffrey Chaucer
San Antonio Spurs - wikipedia
The San Antonio Spurs are an American professional basketball team based in San Antonio, Texas. The Spurs compete in the National Basketball Association (NBA) as a member of the league 's Western Conference Southwest Division. The team plays its home games at the AT&T Center in San Antonio.
The Spurs are one of four former American Basketball Association (ABA) teams to remain intact in the NBA after the 1976 ABA -- NBA merger and the only former ABA team to have won an NBA championship. The Spurs ' five NBA championships are the fourth most in history (tied with the Golden State Warriors) behind only the Boston Celtics (17), Los Angeles Lakers (16), and Chicago Bulls (6). The Spurs currently rank first among active franchises for the highest winning percentage in NBA history, and have a winning head - to - head regular season record against every active NBA franchise.
In their 40 NBA seasons since 1976 -- 77, the Spurs have won 22 division titles. They have made the playoffs in 27 of the last 28 seasons (since 1989 -- 90) and have missed the playoffs only four times since entering the NBA; they have not missed the playoffs in the 20 seasons since Tim Duncan was drafted by the Spurs in 1997. With their 50th win in the 2016 -- 17 season, the Spurs extended their record for most consecutive 50 - win seasons to 18 (the Spurs did not win 50 games in the strike - shortened 1998 -- 99 season, which lasted only 50 games). Since the 1997 -- 98 season, the Spurs have had 20 consecutive seasons with a regular - season winning percentage of .610 or greater, which is also an NBA record. The team's success during this period coincides with the tenure of current head coach Gregg Popovich, who had been the team's general manager before replacing Bob Hill as coach in 1996.
The Spurs are the city 's only team in any of the four major U.S. professional sports leagues and the only major - league team in the city 's history to have lasted more than five years. Spurs players are active members of the San Antonio community, and many former Spurs are still active in San Antonio including David Robinson with the Carver Academy and George Gervin with the George Gervin Youth Center.
The Spurs set several NBA attendance records while playing at the Alamodome, including the largest crowd ever for an NBA Finals game in 1999, and they continue to sell out the smaller AT&T Center (formerly SBC Center) on a regular basis.
Since 2003, the team has been forced on an extended road trip for much of February since the AT&T Center hosts the San Antonio Stock Show & Rodeo. This is informally known as the "Rodeo Road Trip ''. The Spurs have consistently posted winning road records during this period, including an NBA - record longest single road trip winning streak (eight games out of nine, achieved in 2003).
When the Spurs have won the NBA title, the team 's victory parades have been boat trips on the San Antonio River Walk.
The San Antonio Spurs started out as the Dallas Chaparrals of the original American Basketball Association (ABA). Led by player - coach Cliff Hagan, the Dallas Chaparrals were one of 11 teams to take the floor in the inaugural season of the upstart ABA. The Chaps' second season was a disappointment, as the team finished in fourth place with a mediocre 41 -- 37 record. In the playoffs, the Chaparrals quickly fell to the New Orleans Buccaneers.
The team suffered from poor attendance and general disinterest in Dallas. In fact, during the 1970 -- 71 season, the name "Dallas '' was dropped in favor of "Texas '' and an attempt was made to make the team a regional one, playing games in Fort Worth, Texas, at the Tarrant County Convention Center, as well as Lubbock, Texas, at the Lubbock Municipal Coliseum, but this proved a failure and the team returned full - time to Dallas in time for the 1971 -- 72 season, splitting their games at Moody Coliseum and Dallas Convention Center Arena.
While the Chaparrals had been modestly successful on the court, they were sinking financially by their third season, largely because the ownership group refused to spend much money on the team. After missing the playoffs for the first time in their existence in the 1972 -- 73 season, nearly all of the owners wanted out. A group of 36 San Antonio businessmen, led by manager Angelo Drossos, chairman of the board John Schaefer, and president Red McCombs, worked out a "lend - lease" deal with the Dallas ownership group: Drossos and his group would lease the team for three years and move it to San Antonio, agreeing to return the team to Dallas if no purchase occurred by 1975.
After the deal was signed, the team was renamed the San Antonio Gunslingers. However, before they even played a game the name was changed to Spurs. The team 's primary colors were changed from the red, white, and blue of the Chaparrals to the now familiar black, silver and white motif of the Spurs.
In the first game at HemisFair Arena, the Spurs lost to the San Diego Conquistadors despite attracting a noisy crowd of 6,000 fans. A smothering defense was the team's image, as they held opponents to fewer than 100 points an ABA - record 49 times. The early Spurs were led by ABA veteran James Silas, and the team got stronger as the season went on, twice taking advantage of the Virginia Squires: acquiring Swen Nater, who would go on to win Rookie of the Year, in November, and "The Iceman" George Gervin in January. The ABA tried to halt the Gervin deal, claiming it was detrimental to the league, but a judge ruled in the Spurs' favor, and Gervin made his Spurs debut on February 7. The Spurs went on to finish with a 45 -- 39 record, good for third place in the Western Division.
In the playoffs, the Spurs would battle the Indiana Pacers to the bitter end before falling in seven games. San Antonio embraced the Spurs with open arms; the Spurs drew 6,303 fans per game, surpassing the Chaparrals ' entire total attendance in only 18 games. Schaefer, Drossos and McCombs knew a runaway hit when they saw it. After only one year, they exercised their option to tear up the lease agreement, buy the franchise outright and keep the team in San Antonio for good.
The team quickly made themselves at home at HemisFair Arena, playing to increasingly large and raucous crowds. Despite a respectable 17 -- 10 start to the 1974 -- 75 season, coach Tom Nissalke was fired as the owners became tired of the Spurs' slow, defensive style of play. He was replaced by Bob Bass, who declared that the Spurs would have an entirely new playing style: "It is my belief that you can not throw a set offense at another professional team for 48 minutes. You've got to let them play some schoolyard basketball." George Gervin and James Silas took that style to heart, as the Spurs became an exciting fast - breaking team on the way to a solid 51 -- 33 record, good enough for second place in the West. Gervin said, "Our whole theory was that you shoot 100 times, we'll shoot 107." However, in the playoffs the Spurs fell to the Indiana Pacers in six games.
Even though playoff success would elude the team, the Spurs had suddenly found themselves among the top teams in the ABA. Moreover, their gaudy attendance figures made them very attractive to the NBA, despite the size of the market. Although San Antonio had over 650,000 people at the time (and has since grown to become the seventh - largest city in the United States), it has always been a medium - sized market because the surrounding suburban and rural areas are not much larger than the city itself. In June 1976, the ABA -- NBA merger took place, moving San Antonio 's sole professional sports franchise into a new league. The Spurs, Denver Nuggets, Indiana Pacers and New York Nets moved to the NBA for the 1976 -- 77 season.
The Spurs and the other three ABA teams agreed to pay the owners of two other strong ABA teams that folded instead of joining the NBA. John Y. Brown, Jr., the owner of the Kentucky Colonels, received $3 million, which he used to purchase the NBA 's Buffalo Braves and later the Boston Celtics, after selling star guard Louie Dampier to the Spurs. The owners of the Spirits of St. Louis received a portion of all television profits during their NBA tenure, which amounted to roughly 1 / 7 of the Spurs ' television profit every year. This agreement placed particular financial pressure on the Spurs and the other three former ABA teams. In 2014, the Spirits ' owners reached agreement with the NBA to end the perpetual payments and take a lump sum of $500 million instead.
Although there was some initial skepticism in league circles regarding the potential success and talent levels of the incoming ABA teams, the Spurs would prove worthy of NBA inclusion during the 1976 -- 77 season with a record of 44 -- 38, good for a tie for fourth place overall in the Eastern Conference. This was done in spite of significant handicaps the NBA imposed on the incoming ABA teams, limiting their draft picks and television revenues during their early time in the merged league. They gained a new rival in the form of the Houston Rockets, who had played in Texas for five years prior to the merger.
During the 1977 -- 78 season, George Gervin and David Thompson of the Denver Nuggets battled all season for the NBA scoring title. On the final day of the season, Thompson took the lead by scoring 73 points in an afternoon game against the Detroit Pistons. That night, Gervin knew he needed 58 points against the Jazz in New Orleans. He got off to a good start by scoring 20 points in the first quarter. In the second, The Iceman was even better, setting a single - period record with 33 points. Early in the third period Gervin scored his 58th point on the way to 63, capturing the scoring title. While Gervin was lighting up the scoreboard, the Spurs were winning the Central Division with a 52 -- 30 record.
However, in the playoffs the Spurs were stunned in six games by the Washington Bullets, despite an outstanding series from Gervin, who averaged 33.2 ppg. The following season, in the 1979 Conference Finals, the Spurs led the series 3 -- 1, but the Bullets came back to win the last three games, taking Game 7 107 -- 105 and handing the Spurs a heartbreaking loss. The Spurs would have to wait another 20 years to reach their first NBA Finals.
The Spurs went on to capture five division titles in their first seven years in the NBA and became a perennial playoff participant. In the playoffs, however, the Spurs could never catch a break, losing to teams such as the Washington Bullets, the Boston Celtics, the Houston Rockets, and the Los Angeles Lakers.
As the 1980s progressed, the Spurs would see their shares of highs and lows. For the first few seasons of the decade, the Spurs continued their success of the 1970s with records of 52 -- 30 in 1980 -- 81, 48 -- 34 in 1981 -- 82, and 53 -- 29 in 1982 -- 83 (it was during this period that the Spurs were moved to the Western Conference).
Despite their regular season success, the Spurs were unable to win an NBA championship, losing in the Western Conference playoffs to the Houston Rockets in the first round in 1981, and to the Los Angeles Lakers in four games in 1982 and in six games in the 1983 Western Conference Finals, despite winning both of their games at the Forum in the 1983 series. The Spurs lost every home game in both the 1982 and 1983 series against the Lakers, as Magic Johnson, Kareem Abdul-Jabbar and company were too strong. The Spurs did not return to the conference finals until 1995.
After the 1984 -- 85 season, Gervin, who had been the Spurs ' biggest star, was traded to the Chicago Bulls in what effectively signaled the end of the era that began when the Spurs first moved to San Antonio.
The next four seasons were a dark time in Spurs ' history with the team having a combined record of 115 -- 213 from 1985 -- 86 until 1988 -- 89. The losing seasons and dwindling attendance often caused the Spurs to be mentioned as a potential candidate for relocation to another city.
The lone bright spot during this period was the Spurs being awarded the top pick in the 1987 NBA draft through the NBA Draft Lottery. The Spurs used this selection on United States Naval Academy standout David Robinson. Although Robinson was drafted in 1987, the Spurs had to wait until the 1989 -- 90 season to see him play, due to a two-year service commitment he had to fulfill with the United States Navy.
The Spurs seemingly bottomed out in 1988 -- 89 with a record of 21 -- 61, the worst in franchise history at the time. However, the 1989 -- 90 season was notable for several reasons. It was the first season of full ownership for Red McCombs, an original investor who helped solidify local ownership of the team. The previous season, 1988 -- 89, had also featured the debut of Larry Brown as the Spurs' head coach; Brown moved to San Antonio after winning the NCAA national championship with Kansas in 1988.
Although there was speculation that Robinson might choose not to sign with the Spurs and to become a free agent once his Navy commitment ended, Robinson decided in the end to come to San Antonio for the 1989 -- 90 season.
While it was thought his arrival would make the Spurs respectable again, no one expected what happened in his rookie season. Led by Robinson, 1989 draftee Sean Elliott from Arizona, and trade acquisition Terry Cummings from the Milwaukee Bucks, the Spurs achieved the biggest one - season turnaround in NBA history, finishing with a record of 56 -- 26. They also jumped all the way to first place in the Midwest Division, their first division title in seven years. Robinson had one of the most successful rookie seasons for a center in NBA history, finishing the season as the unanimous Rookie of the Year while averaging 24.3 points and 12.0 rebounds.
The Spurs began the 1990s with great optimism. The team became a perennial playoff presence, although unable to advance further than the second round of the NBA Playoffs under Brown 's tutelage. Late in the 1991 -- 92 season, McCombs fired Brown and replaced him with Bob Bass for the remainder of the season. Without a healthy David Robinson, the Spurs were swept out of the first round of the playoffs by the Phoenix Suns.
McCombs made national headlines during the summer of 1992 with the hiring of former UNLV head coach Jerry Tarkanian. The Tarkanian experiment proved a flop, as the coach was fired 20 games into the 1992 -- 93 season with the Spurs' record at 9 -- 11. After Rex Hughes filled in as head coach for one game, NBA veteran John Lucas was named head coach. It was Lucas' first NBA coaching assignment, although he had gained recognition in league circles for his success in helping NBA players recover from drug abuse. The Lucas era started out successfully: his coaching propelled the team to a 39 -- 22 finish over the rest of the regular season, and the team reached the Western Conference semi-finals.
In 1993, local businessman Peter M. Holt and a group of 22 investors purchased the Spurs from Red McCombs for $75 million. Prior to the 1993 -- 94 season, the Spurs' first in the newly built Alamodome, the team traded fan-favorite Elliott to the Detroit Pistons in return for rebounding star Dennis Rodman. Lucas led the team to a 55 -- 27 record, but a first-round playoff loss led to his immediate firing as head coach.
Lucas was replaced by former Pacers coach Bob Hill for the 1994 -- 95 season. Elliott returned to the team after an uneventful season with the Pistons, and the team finished with the NBA 's best record at 62 -- 20, cracking the 60 - win mark for the first time in franchise history. Robinson was named the league 's Most Valuable Player. The Spurs reached the Western Conference Finals, but lost to the eventual NBA Champion Houston Rockets. Throughout the season, and particularly in the playoffs, there appeared to be friction developing between Rodman and several Spurs ' teammates, most notably Robinson. Rodman was traded to the Chicago Bulls after the season, and helped the Bulls win three titles from 1996 to 1998.
The Spurs finished the 1995 -- 96 season under Hill at 59 -- 23 and lost in the Western Conference semi-finals. Few observers could have predicted how far the Spurs would fall during the 1996 -- 97 season, especially after the signing of Dominique Wilkins. Robinson missed the first month of the season due to a back injury. He returned in December, but played only six games before a broken foot sidelined him for the rest of the season. Elliott also missed more than half the season due to injury. Without Robinson and Elliott, the Spurs were a rudderless team. The lone bright spot was Wilkins, who led the team in scoring with an average of 18.2 ppg. The Spurs ended the season with a 20 -- 62 record, the worst in franchise history, and to date the last time they have missed the playoffs. Hill lasted only 18 games as coach that season before being fired and replaced by general manager Gregg Popovich, who had also served a stint under Brown as an assistant coach. Wilkins played his lone season for San Antonio in 1996 -- 97, knowing his playing time would greatly diminish the next season.
As disastrous as the 1996 -- 97 season was for the Spurs, the offseason proved to be the opposite. With the third - worst record in the league, the Spurs won the NBA 's draft lottery, which gave them the top pick in the 1997 draft. The Spurs used their pick to select Wake Forest product and consensus All - American Tim Duncan.
Gregg Popovich watched Tim Duncan play in the summer league and immediately remarked, "If I try to coach this guy, the only thing I can do is screw him up," effectively acknowledging that Duncan was exceptionally gifted, intelligent, and possessed a keen knowledge of the game.
Duncan quickly emerged as a dominant force in the NBA during the 1997 -- 98 season, averaging 21.1 points and 11.9 rebounds per game as a power forward. He was named First Team All-NBA while winning Rookie of the Year honors. The team finished 56 -- 26, breaking its own record from 1989 -- 90 for the biggest single-season improvement in wins, but once again lost to the Jazz in the Western Conference semi-finals. Although both Duncan and Robinson played low-post roles, the two meshed seamlessly on the court.
With a healthy Robinson and Duncan and the additions of playoff veterans such as Mario Elie and Jerome Kersey, the Spurs looked forward to the 1998 -- 99 season. Prior to the beginning of training camps, however, the NBA owners, led by commissioner David Stern, locked out the players in order to force a new collective bargaining agreement with the National Basketball Players Association (NBPA). The season was delayed for over three months until resolution on a new labor agreement was reached in January 1999.
Playing a shortened 50-game season, the Spurs earned an NBA-best 37 -- 13 record (a .740 winning percentage; as of 2016, this was the only season since Duncan was drafted in which the Spurs did not win at least 50 games). The team was just as dominant in the playoffs, rolling through the Western Conference with a record of 11 -- 1. In the NBA Finals, they faced the New York Knicks, who had made history by becoming the first eighth seed ever to reach the NBA Finals. The Spurs won the series, and the franchise's first NBA Championship, in Game 5 at the Knicks' home arena, Madison Square Garden. Duncan was named the NBA Finals MVP. The Spurs became the first former ABA team to reach and win the NBA Finals. They also won the 1999 McDonald's Championship in the off-season and were the last champions of that tournament, which was disbanded shortly afterward.
Coming off their first NBA Championship, the Spurs were still among the best teams in the West, battling for first place in the Midwest Division during the 1999 -- 2000 season. On March 14, the Spurs' playoff hopes got a lift when Sean Elliott, who had received a kidney transplant from his brother prior to the season, returned and played in the last 19 games. As the season wound down, Duncan suffered a knee injury, and the Spurs finished in second place with a 53 -- 29 record. Without Duncan, the Spurs were knocked out of the playoffs by the Phoenix Suns in four games.
The long-term viability of the Spurs franchise in San Antonio was, however, secured during the 1999 -- 2000 season, as Bexar County voters approved increases in car rental and hotel taxes to fund the construction of a new arena next to the Freeman Coliseum. The Spurs finished 58 -- 24 in both the 2000 -- 01 and 2001 -- 02 seasons, but were ousted in both postseasons by the eventual NBA champion Los Angeles Lakers, getting swept in the 2001 Conference Finals and losing in five games in the second round in 2002.
Entering the 2002 -- 03 season, the team knew it would be memorable for at least two reasons, as David Robinson announced that it would be his last in the NBA and the Spurs would begin play at their new arena, the SBC Center, named after telecommunications giant SBC, whose corporate headquarters were located in San Antonio (SBC became AT&T after its acquisition of its former parent company). To mark this occasion, the Spurs revamped their "Fiesta Colors '' logo and reverted to the familiar silver and black motif (though, during the time of the Fiesta logo, the uniform remained silver and black). This version of the Spurs was very different from the team that had won the title a few years earlier. Second - year French star Tony Parker, drafted by the Spurs in the first round of the 2001 NBA draft, was now the starting point guard for the Spurs. The squad featured a variety of newly acquired three - point shooters, including Stephen Jackson, Danny Ferry, Bruce Bowen, Steve Kerr, Steve Smith and Argentine product Manu Ginóbili, a 1999 second - round draft choice playing in his first NBA season.
The Spurs christened the SBC Center in style on November 1, 2002, defeating the Toronto Raptors 91 -- 72. The team did not get off to a flying start, holding just a 19 -- 13 record heading into January. In January the Spurs began to gel and seemed primed to make a run when they embarked on their annual Rodeo Road Trip, a nine-game road trip from January 25 to February 16. The trip proved hardly a bump in the road for the charging Spurs, who won eight of the nine games and began climbing toward first place. The Spurs went on to erase a seven-game deficit and finished the season tied with the Dallas Mavericks for the best record in the NBA (60 -- 22). Thanks to a tiebreaker, the Spurs won their third straight division title as Tim Duncan claimed his second straight NBA MVP award.
In the playoffs, the Spurs defeated the Suns, Lakers and Mavericks en route to facing the New Jersey Nets in the NBA Finals. The series against the Nets marked the first time two former ABA teams played each other for the NBA Championship. The Spurs won the series 4 -- 2, giving them their second NBA Championship in franchise history. Duncan, after having been named NBA MVP, was also named Finals MVP.
Coming off their second NBA Championship, the Spurs faced the retirement of David Robinson, which left a void in San Antonio's daunting defense, while playoff hero Steve Kerr and veteran forward Danny Ferry also retired. Meanwhile, backup point guard Speedy Claxton left for the Warriors, and Stephen Jackson left for Atlanta. With several holes to fill in their rotation, the Spurs made several key signings in the off-season: Rasho Nesterovic and Hedo Türkoğlu were brought in to replace Robinson and Jackson, respectively. The most important off-season acquisition, however, proved to be the signing of veteran Robert Horry.
The Spurs, playing with nine new players, struggled early; missing Robinson's presence and with the newcomers still finding their roles, they held a 9 -- 10 record on December 3. However, the Spurs turned it around, ending December on a 13-game winning streak and quickly climbing back to the top of the NBA standings. They battled all year for the top spot in the Western Conference and ended the season on another strong note, winning their final 11 games, but fell one game short of a division title and the best record in the West, posting a record of 57 -- 25. In the second round of the playoffs, the Spurs found themselves in another showdown with the Los Angeles Lakers. The Spurs won Games 1 and 2 at home, but dropped the next two in Los Angeles. In Game 5 back in San Antonio, Duncan seemingly delivered the Spurs a 73 -- 72 win with a dramatic shot with just 0.4 seconds remaining. However, the Lakers' Derek Fisher launched a game-winner as time expired, giving the Lakers a stunning 74 -- 73 win and a 3 -- 2 series lead. The Spurs eventually lost the series in six games.
After their disappointing second-round collapse, the Spurs looked to regain the NBA crown. With the acquisition of guard Brent Barry from Seattle, the Spurs got off to a quick start, posting a 12 -- 3 record in November. They stayed hot through December, entering the New Year at 25 -- 6. With the later additions of center Nazr Mohammed from New York (acquired in a midseason trade of Malik Rose) and veteran forward Glenn Robinson from free agency, alongside regulars Bruce Bowen, Robert Horry, Tony Parker, Manu Ginóbili, and Tim Duncan, the Spurs stayed near the top of the Western Conference all season, battling the Phoenix Suns for the best record in the NBA. Just as it appeared the Spurs would cruise toward the playoffs, their season hit a bump in the road when Tim Duncan suffered an ankle injury. The Spurs struggled down the stretch, finishing 59 -- 23, but by the time the playoffs rolled around, Duncan was ready to return.
In the postseason, the Spurs went through the West relatively easily, culminating in a five-game victory over the Phoenix Suns in the Conference Finals. In the NBA Finals, the Spurs faced the defending champion Detroit Pistons. The first two games in San Antonio were both blowout Spurs victories, with Ginóbili leading the way with 26 and 27 points. However, as the series shifted to Detroit, the Spurs were the ones who were blown out, losing Games 3 and 4 by big margins as the Pistons evened the series. Facing a third straight loss in Detroit, the Spurs played tougher in Game 5, which went into overtime. After going scoreless in the first half, Robert Horry hit a clutch three-point shot with nine seconds remaining to give the Spurs a dramatic 96 -- 95 win. The series moved back to San Antonio for Game 6, but the Spurs were unable to close out the series, setting up a deciding Game 7. In Game 7, Duncan had 25 points as the Spurs pulled away late to win their third NBA title in seven years, 81 -- 74. Duncan was named Finals MVP, becoming the fourth player to win the award three times (joining Magic Johnson, Shaquille O'Neal, and Michael Jordan).
Coming off their third NBA Championship in seven years, there was a sense that the Spurs were the class of the NBA and once again the team to beat for the championship. For the 2005 -- 06 season, the Spurs acquired two-time All-Star Michael Finley and one-time All-Star Nick Van Exel. Not surprisingly, the Spurs came flying out of the gate, winning 16 of their first 19 games. Once again, the Spurs were challenged within their own division by the Dallas Mavericks, as the two teams held the best records in the Western Conference all season while battling for first place. In the end, the Spurs' experience made the difference as they won the Southwest Division again with a new franchise-best record of 63 -- 19. The Spurs met the Mavericks in the second round of the playoffs, but Dallas came out on top 4 -- 3, including a 119 -- 111 overtime victory in Game 7.
The Spurs struggled during the first half of the 2006 -- 07 season, which led to discussions of trading away veteran players to build for the future. The team remained intact, won 13 games in a row during February and March, and went an NBA-best 25 -- 6 over the final 31 games to claim the 3-seed in the West. The Spurs cruised through the first round, while the #1-seeded Dallas Mavericks were upset. This set up a second-round matchup with the Phoenix Suns that was viewed as the key series of the entire NBA Playoffs, as it featured the two teams with the best remaining records in the NBA.
The Spurs went on to win 4 -- 2 in the contentious and controversial series versus the Suns. The series featured a Robert Horry foul on Steve Nash toward the end of Game 4 which resulted in Horry being suspended for two games. Those who said the second - round series against the Suns was the true NBA Finals would be proven right, as the Spurs easily dispatched the Utah Jazz in five games to reach the NBA Finals. In the Finals, the Spurs swept the Cleveland Cavaliers and captured their fourth title in nine years. Tony Parker, who dominated in the Finals averaging 24.5 ppg on 57 % shooting, was named Finals MVP and became the first European - born player to win the award.
The 2007 -- 08 season saw the Spurs go 56 -- 26 and finish third in the Western Conference. The Spurs overcame hurdles to reach the Western Conference Finals, but lost to the Lakers in five games. The next season the Spurs dropped to 54 -- 28 and lost to the Dallas Mavericks in the first round of the playoffs.
Two days before the 2009 NBA draft, general manager R.C. Buford acted to address the team 's age and health concerns by acquiring 29 - year - old swingman Richard Jefferson from the Milwaukee Bucks. The Spurs sent 38 - year - old Bruce Bowen, 36 - year - old Kurt Thomas, and 34 - year - old Fabricio Oberto to the Bucks, who swapped Oberto to the Detroit Pistons for Amir Johnson.
The Spurs held three second-round picks in the 2009 draft. Their selection of Pittsburgh Panthers forward DeJuan Blair with the No. 37 pick was described as a "steal" by analysts; the Spurs later drafted two guards they had also targeted at No. 37, taking Miami Hurricanes shooting guard Jack McClinton and French point/shooting guard Nando de Colo with the No. 51 and No. 53 picks, respectively. On July 10, 2009, the Spurs signed Detroit Pistons power forward Antonio McDyess to a three-year deal worth approximately $15 million in guaranteed money.
The Spurs struggled with injuries during the 2009 -- 10 regular season, but managed another 50-win season, finishing at 50 -- 32. The seventh-seeded Spurs once again battled the Mavericks in the first round of the playoffs. After falling to the Mavericks in Game 1, the Spurs avenged their 2009 defeat to Dallas by winning the series in six games. The Spurs, however, were swept out of the playoffs in the following round by the Phoenix Suns.
In the 2010 NBA draft, Spurs management held its highest pick since the Tim Duncan draft more than a decade earlier, selecting James Anderson from Oklahoma State at No. 20. However, Anderson missed much of the first half of the season due to injuries. In 2010 -- 11, the Spurs finished 61 -- 21 to claim the No. 1 seed, but an injury to Ginóbili in the final regular season game took a toll on the team, and they were upset by the No. 8-seeded Memphis Grizzlies.
2011 brought a change to the Spurs ' philosophy that set the stage for the next successful run in the club 's history. Out went the stream of last - legs, wizened veterans that the Spurs had relied on to fill out the rotation behind the Big Three. Minutes went to younger and more athletic talent like Danny Green, Gary Neal, and Tiago Splitter, to whom Popovich would teach The Spurs ' Way -- a fast pace, unselfish passing, and accountability on defense. The biggest personnel move of the Spurs ' off - season had the club sending the beloved guard George Hill to his hometown Indiana Pacers for San Diego State 's Kawhi Leonard, a hyper - athletic forward selected # 15 overall by the Pacers in the 2011 NBA draft. The team also selected Texas Longhorns ' Cory Joseph as the # 29 overall pick.
After the lockout that delayed the 2011 -- 12 season, the Spurs signed T.J. Ford, who retired in the middle of the season after playing only 14 games due to a stinger. Before the trade deadline, the Spurs parted ways with Richard Jefferson, sending him to the Golden State Warriors for Stephen Jackson, who had been a member of the 2003 championship team. Leonard then became the starting small forward. In the week following the trade deadline, the Spurs also signed forward Boris Diaw, whose contract had been bought out by the Charlotte Bobcats, and former Portland Trail Blazers guard Patrick Mills, who had played for the Xinjiang Flying Tigers in the CBA during the lockout. This gave the Spurs a deeper bench for their playoff run.
Despite the 66-game season shortened by the NBA lockout, the Spurs won 50 games and tied the Chicago Bulls for the best record in the league. They extended their streak of 50+ win seasons, dating to the 1999 -- 2000 season, to 13, an NBA record. Popovich won his second Coach of the Year award.
The Spurs swept the first two rounds of the Playoffs. With those two sweeps, a 10 - game win streak to end the season, and wins in Games 1 and 2 of the Western Conference Finals, the Spurs would win 20 straight games. However, the Oklahoma City Thunder would end up winning the next four games in the West Finals, to take the series 4 -- 2.
During the 2012 off - season, the Spurs re-signed swingman Danny Green, who was a welcome surprise for them from the previous season, and Tim Duncan, both for three years. The Spurs would have a strong 2012 -- 13 season, going 58 -- 24 and earning the # 2 seed in the West.
The Spurs clinched a playoff berth for a 16th consecutive season and extended their NBA-record streak of 50+ win seasons to 14. On April 16, the Spurs signed two-time scoring champion and seven-time All-Star Tracy McGrady to help in the playoffs after waiving Stephen Jackson. The Spurs finished the regular season second in the Western Conference behind the Oklahoma City Thunder, and swept the Los Angeles Lakers in the first round, 4 -- 0. In the second round of the 2013 playoffs, the Spurs faced Stephen Curry and the Golden State Warriors, beating them four games to two. In the conference finals, the Spurs swept the Memphis Grizzlies, with Tony Parker delivering an 18-assist performance in Game 2 and a 37-point performance in Game 4. The Spurs then met the defending champion Miami Heat in the NBA Finals.
The Spurs and Heat alternated wins over the first six games of the series. In Game 6, the Spurs were on the verge of winning their fifth NBA title, up five points with 28 seconds to go in regulation. An unlikely and uncharacteristic series of mishaps doomed the Spurs down the stretch, including Popovich benching Duncan at the end of regulation with the Spurs on defense. The Heat missed their field goal attempt, but the undersized Spurs could not grab the defensive rebound. Chris Bosh rebounded the ball, and Ray Allen hit a 3-pointer to tie the game with five seconds left in regulation, sending it to overtime, where the Spurs were defeated 103 -- 100. In Game 7, San Antonio jumped out to an early lead and kept the game close the entire way. Toward the end of the game, however, and despite a 24-point, 12-rebound effort, Duncan failed to convert on two attempts to tie the game: a missed layup and a missed tip-in that allowed LeBron James to hit a jumper and increase the Heat's lead to 92 -- 88. After a steal from Ginóbili, James hit two free throws after being fouled by Duncan, and when Ginóbili missed a subsequent 3-pointer, Dwyane Wade made one of two from the free throw line to put the game on ice, as the Heat won their second straight championship.
The Spurs returned with their core roster largely intact, adding free agents Marco Belinelli and Jeff Ayres (formerly Jeff Pendergraph) while losing Gary Neal to the Milwaukee Bucks. The Spurs clinched the best record in the NBA with 62 wins, which included a franchise-record 19 straight wins in February and March. In the first round of the playoffs, the eighth-seeded Dallas Mavericks surprised the Spurs by taking the series to seven games, but the Spurs prevailed in convincing fashion in the deciding Game 7. In the second round, Tim Duncan surpassed Karl Malone for fifth place on the NBA Playoffs all-time scoring list as the Spurs cruised past the Portland Trail Blazers in five games. San Antonio then defeated the Oklahoma City Thunder in six games in the Western Conference Finals, their third straight appearance in that round, to advance to the Finals for a second consecutive year and a rematch with the Miami Heat, the first time the franchise had reached the Finals in consecutive years. It was also the first time since the 1998 NBA Finals that the same two teams faced off in the Finals in consecutive years. With a victory in the second game of the series, Duncan, Ginóbili, and Parker had won more playoff games together than any other three players on the same team in NBA history. The Spurs went on to win the 2014 NBA Championship, 4 games to 1, blowing out Miami in each of their wins by 15 or more points. Kawhi Leonard had a breakout series and was named NBA Finals MVP, becoming the third-youngest player to win the award, behind Magic Johnson and teammate Duncan. In the 2014 NBA draft, the Spurs selected Kyle Anderson out of UCLA with the 30th overall pick.
During the 2014 offseason, the Spurs made headlines when they announced that they had hired Becky Hammon as an assistant coach, effective with her retirement as a player at the end of the 2014 WNBA season. Hammon became the first full - time female coach in any of the four major U.S. professional leagues.
The 2014 -- 15 season was an up-and-down campaign, but the Spurs finished strong with a 55 -- 27 regular season record and the 6th seed in the West, qualifying for the playoffs. They faced the Los Angeles Clippers in the first round and went up 3 -- 2 heading into Game 6 in San Antonio. However, the Clippers won that game and went on to win Game 7 at home. The Spurs became the first defending champions since the 2011 -- 12 Dallas Mavericks to be eliminated in the first round of the NBA playoffs.
With the offseason acquisitions of David West and LaMarcus Aldridge, the Spurs finished with a franchise-record 67 -- 15 mark, their best winning percentage in franchise history, earning them the Southwest Division title. They also set an NBA record for most home wins in a season with 40, tying the 1985 -- 86 Boston Celtics' 40 -- 1 home record, and fielded the league's best defense. In the playoffs they swept the shorthanded Memphis Grizzlies in the first round before losing to the Oklahoma City Thunder in six games in the second round, becoming the first team since the 2006 -- 07 Dallas Mavericks to finish with 67 wins and be eliminated before the conference finals.
On July 11, 2016, Duncan announced his retirement from the NBA after 19 seasons with the Spurs.
Despite the loss of longtime captain Tim Duncan, the Spurs, now led by Kawhi Leonard, remained a perennial playoff contender and finished with a record of 61 -- 21. After defeating the Grizzlies and the Rockets in the first two rounds, the Spurs' season ended with a four-game sweep by the eventual champion Golden State Warriors, with Leonard, Parker, and David Lee all suffering injuries in the playoffs. In the following offseason, the Spurs re-signed Aldridge and Pau Gasol and signed Rudy Gay, but lost Dewayne Dedmon and Jonathon Simmons to free agency.
The rivalry between the San Antonio Spurs and the Los Angeles Lakers started in the late 1970s and peaked in the late 1990s and early 2000s. Since 1999, the teams have met in the NBA Playoffs seven times, with the clubs combining to appear in seven straight NBA Finals from 1999 to 2005. Additionally, the teams won every NBA title from 1999 to 2003 (the Spurs in 1999 and 2003, the Lakers in 2000, 2001, and 2002). From 1999 to 2004, the rivalry was considered the NBA's best, as each time the clubs faced each other in the playoffs, the winner advanced to the NBA Finals. The rivalry fell off from 2005 to 2007, with the Lakers missing the playoffs in 2005 and losing in the first round to the Phoenix Suns in 2006 and 2007, but intensified again in 2008 when the teams met in the Western Conference Finals. The teams met for a 12th time in the first round of the 2013 playoffs, with the Spurs winning in four games.
The rivalry between the San Antonio Spurs and the Dallas Mavericks features two teams with Dallas roots. The Spurs began their life in the ABA as the Dallas Chaparrals and did not move to San Antonio until 1973. On October 11, 1980, the Mavs made their NBA debut by defeating the Spurs 103 -- 92. In the playoffs the Spurs defeated the Mavericks in 2001, 2003, 2010, and 2014; while the Mavericks defeated the Spurs in 2006 and 2009. The Spurs have won five championships and six conference titles, while the Mavericks have won one championship and two conference titles. The Spurs have won 18 division titles, while the Mavericks have won 3. The Mavericks have three 60 - win seasons, while the Spurs have five.
The two teams met in the playoffs during the 2000 -- 2001 season, with the Spurs winning in five games. Little was made of this series, as the Spurs had won their first NBA championship only two years before. The Mavericks, led by a trio of Steve Nash, Michael Finley, and Dirk Nowitzki, had just defeated the Utah Jazz despite not having home - court advantage, and were only starting to meld into a title contender.
The rivalry took on a new meaning in 2005 when, near the end of the regular season, Don Nelson resigned as Dallas' head coach, apparently dissatisfied with the state of the team, and handed the coaching reins to former Spur Avery Johnson, the point guard of the 1999 NBA champion Spurs team who hit the series - winning shot against the New York Knicks. Having played under Spurs head coach Gregg Popovich, Johnson was familiar with most, if not all, of Popovich's coaching style and philosophy. During the 2005 offseason, Michael Finley, waived by the Mavericks under the amnesty clause, joined the Spurs in search of an elusive title, which he finally won with the Spurs in 2007. Ironically, part of his salary that season was still being paid by the Mavericks, who had waived him to clear cap room.
The Mavericks were swept in the 2012 -- 13 season by the Spurs for the first time since the 1998 season, Tim Duncan 's rookie season. In their last match up of the season, San Antonio escaped with a 95 -- 94 victory over Dallas when a Vince Carter 3 - point attempt bounced off the rim at the buzzer. With that win, the Spurs clinched a playoff spot for a 16th straight season, currently the longest streak in the NBA. San Antonio also reached 50 wins for a 14th consecutive season, the longest streak in NBA history.
In the 2013 -- 14 season, the Spurs once again swept the Mavs in the regular season, extending their winning streak over Dallas to nine straight games. In addition, an overtime loss to the Memphis Grizzlies on April 16, 2014 ensured that the Mavericks would face the Spurs once again in the 2014 NBA Playoffs, with the Mavs as the eighth seed and San Antonio the first. Game 1 in San Antonio was relatively close: Dallas built an 81 -- 71 lead in the fourth quarter, but the Spurs rallied back to take the game 90 -- 85. The Mavs then forced 22 turnovers in Game 2 to rout the Spurs 113 -- 92, splitting the first two games before the series went to Dallas. In Game 3, Manu Ginóbili hit a shot that put the Spurs up 108 -- 106 with 1.7 seconds left, but a buzzer - beating three - pointer by Vince Carter gave the Mavs the victory and a 2 -- 1 series lead. The Spurs took Game 4 in Dallas 93 -- 89 and then Game 5 at home 109 -- 103 for a 3 -- 2 lead. The Mavs avoided elimination in Game 6 at home by rallying in the fourth quarter to win 113 -- 111. The Spurs then advanced to the second round with a Game 7 blowout, winning 119 -- 96.
Also known as the I - 10 Rivalry, since San Antonio and Houston both lie along the Interstate 10 freeway. The rivalry between the San Antonio Spurs and the Houston Rockets began in 1976 when the Spurs were absorbed into the NBA from the American Basketball Association, along with the Denver Nuggets, the New York Nets, and the Indiana Pacers. The Rockets and Spurs competed for the division title, with the Rockets winning it first in 1977 and the Spurs in 1978 and 1979. In 1980, they met in the playoffs for the first time, as the Rockets, led by Moses Malone and Calvin Murphy, beat the Spurs, led by George Gervin and James Silas, 2 -- 1. The rivalry grew more intense as both teams moved from the East to the West. They met again in 1981, this time in the second round. The Spurs, who had won the Midwest Division title, had home - court advantage and were heavily favored over a Rockets team that had finished only 40 -- 42. The two teams fought to the bitter end before the Rockets held on to win Game 7, capped by Murphy's 42 points. The Rockets advanced to the Finals, where they lost to the Boston Celtics. The rivalry continued in 1995 when the defending champion Rockets, led by Hakeem Olajuwon and seeded only sixth, beat the top - seeded Spurs, led by MVP David Robinson, in the Western Conference Finals; Olajuwon, who had won the previous year's MVP award, was widely regarded as having outplayed Robinson. In a regular - season game early in the 2005 season, the Spurs led the Rockets by eight points very late in the fourth quarter. Houston's Tracy McGrady went on a personal 13 -- 4 run in the final 35 seconds to steal the game away from San Antonio, hitting the game - winning 3 - pointer with one second remaining to the delight of the Toyota Center crowd.
The rivalry was renewed in the 2017 Playoffs, in which the two teams met in the Western Conference Semifinals. The match - up was the first between the two teams in the playoffs since the 1995 Western Conference Finals. Five of the six games in the series resulted in blowouts. In game 2 of the series, starting point guard Tony Parker suffered a ruptured quadriceps tendon, forcing him to miss the remainder of the playoffs. In game 5, all - star small forward Kawhi Leonard suffered an injury to his right ankle in the third quarter, resulting in him sitting out for the closing portions of the game. Despite the injury issues, the Spurs were able to send game 5 to overtime. In the overtime period, Manu Ginóbili blocked James Harden 's three - point attempt in the final seconds to secure the 110 -- 107 victory for the Spurs. The Spurs would close out the series in a game 6 blowout, 114 -- 75.
The rivalry between the Spurs and the Phoenix Suns began in the 1990s when the Spurs were led by "The Admiral '', David Robinson, and the Suns were propelled by a number of players including Dan Majerle, Kevin Johnson, and Tom Chambers. The rivalry continued into the next decade with Tim Duncan leading the Spurs and Steve Nash heading the Suns. In 2003, the Spurs beat the Suns 4 -- 2 in the first round. In 2005, the Spurs beat the Suns 4 -- 1 in the Conference Finals. In 2007, the Spurs beat the Suns 4 -- 2 in a Conference Semifinals series notable for pitting the two best teams remaining in the NBA against each other; in a controversial ruling, Diaw and Stoudemire were suspended for Game 5 after leaving the bench during an altercation in Game 4. In 2008, the Spurs beat the Suns 4 -- 1 in the first round, with Tim Duncan hitting a three - pointer to force overtime in Game 1 before the Spurs went on to win the series. In 2010, the Suns swept the Spurs in four games in the playoffs, highlighted by a big - time Game 3 performance from Goran Dragic, who scored 23 fourth - quarter points -- 26 points in thirteen minutes -- against the team that originally drafted him. The rivalry has since cooled off, with the Spurs winning most of the meetings, though not on February 21, 2014, when the Suns blew out the road - weary Spurs, who were ending the Rodeo Road Trip in Phoenix, 106 -- 85. Markieff Morris of the Suns led all scorers with 26 points, while Danny Green led the Spurs with just 15. On April 11, 2014, the Spurs clinched the league - best record while handing the Suns a loss that would keep them out of the postseason, winning 112 -- 104 behind a career - high 33 points from Danny Green. In a pre-season game on October 16, 2014, Suns owner Robert Sarver apologized to the Phoenix crowd for the game, in which the Spurs rested five people, including Popovich.
Popovich would only end his statement about the incident with, "The only thing that surprises me is that he didn't say it in a chicken suit. ''
Since becoming the San Antonio Spurs in 1973, the team colors have been black, silver and white. The distinctive logo of the word Spurs in Eurostile font, with a stylized spur substituting for the letter U, has been a part of the team's identity since the move to San Antonio. The logo incorporated ' Fiesta colors ' of pink, orange and teal from 1989 to 2002 (though the uniforms remained the same), and its alignment changed from straight to arched beginning with the 2002 -- 03 NBA season.
The Spurs have always worn black on the road and white at home, except during the 1973 -- 76 ABA seasons and their first NBA season when the home uniform was always silver. Until the 1988 -- 89 NBA season, the road uniform had "San Antonio '' on the front while the home uniform featured the team nickname adopted from the Spurs logo; from 1973 to 1982, the road uniform lettering was black with silver trim. In addition, from 1977 to 1981 a saddle - like striping was featured on the back of the home shorts. Since the 1989 -- 90 NBA season the Spurs uniform has remained practically the same, with the road uniform now using the team nickname from their logo; a minor change included the addition of another black (road) and white (home) trim to the already silver - trimmed block numbers in the 2002 -- 03 season. The Spurs wear black sneakers and socks on the road, and white sneakers and socks at home (except for select games with the silver alternates), a practice that began in the 2002 -- 03 season. When the NBA moved to the Adidas Revolution 30 technology for the 2010 -- 11 season, the Spurs changed to V - neck jerseys and eliminated striping on the shorts ' beltline.
On September 19, 2012, the Spurs unveiled a silver alternate uniform. In breaking from the traditional practice of placing the team or city name in front, the Spurs ' new uniform features only the stylized spur logo, with the black number trimmed in white and silver on the upper right. The Spurs primary logo is atop the player name and number on the back (replaced by the NBA logo prior to the 2014 -- 15 season), while the Eurostile ' SA ' initials (for San Antonio) are on the left leg of the shorts. Black, silver and white side stripes are also featured on the uniform. The uniforms are worn for select home games. A variation of this uniform, featuring military camouflage patterns instead of the usual silver, was used for two games in the 2013 -- 14 season; a sleeved version was used the next season. Another variation, this time in black, was unveiled for the 2015 -- 16 season.
At times throughout the season, the Spurs wear a jersey that says "Los Spurs '' on the front, in recognition of Latino fans both at home and across the US and Latin America. The Spurs were one of the first NBA teams to wear these branded jerseys; in 2014, the jerseys were sleeved. These events are called "Noches Latinas '', first launched during the 2006 -- 07 NBA season as part of a Hispanic marketing campaign known as "éne - bé - a ''. Six teams in the NBA participate in these events. The Spurs have had the most players from Latin America and are one of only three NBA teams who have had at least five players on their rosters who originate from Latin America and Spain (if one includes Puerto Rico as part of Latin America, although it is a U.S. territory), the others being the Memphis Grizzlies and the Portland Trail Blazers.
The switch to Nike as the uniform provider in 2017 eliminated the "home '' and "away '' uniform designations. The Spurs ' black "Icon '', silver "Statement '' and white "Association '' uniform remained identical to the previous set save for the manufacturer 's logo and updated lettering on the team name.
List of the last five seasons completed by the Spurs. For the full season - by - season history, see List of San Antonio Spurs seasons.
Note: GP = Games played, W = Wins, L = Losses, % = Winning Percentage
Dallas (Texas) Chaparrals
San Antonio Spurs
Roster / Transactions (last transaction: 2017 -- 10 -- 16)
The Spurs own the NBA rights to the players listed in the table below. The typical pattern is to allow the player to develop in leagues outside the United States. The player is free to negotiate contracts in other leagues and is not obligated to play in the NBA. Sometimes, a player 's overseas contract may have an expensive buyout clause that would discourage the Spurs from seeking to bring him in. The Spurs have had past success in finding foreign talent; some examples of this success include the selections of second rounder Manu Ginóbili (57th overall in 1999) and first rounder Tony Parker (28th overall in 2001), who both went on to become All - Stars.
Notes:
Notes:
Bold denotes still active with team. "Name * '' includes points scored for the team while in the ABA. Italics denotes still active but not with team.
Points scored (regular season) (as of the end of the 2016 -- 17 season)
Other Statistics (regular season) (as of the end of the 2016 -- 17 season)
NBA Most Valuable Player
NBA Finals MVP
NBA Rookie of the Year
NBA Defensive Player of the Year
NBA Sixth Man of the Year
NBA Most Improved Player Award
NBA Coach of the Year
NBA Executive of the Year
NBA Sportsmanship Award
J. Walter Kennedy Citizenship Award
Twyman -- Stokes Teammate of the Year Award
NBA scoring champion
NBA rebounding leader
NBA assists leader
NBA blocks leader
NBA steals leader
All - NBA First Team
All - NBA Second Team
All - NBA Third Team
NBA All - Defensive First Team
NBA All - Defensive Second Team
NBA All - Rookie First Team
NBA All - Rookie Second Team
ABA Rookie of the Year Award
ABA Coach of the Year Award
ABA Executive of the Year award
ABA All - Star Game Most Valuable Player Award
All - ABA First Team
All - ABA Second Team
ABA All - Rookie Team
NBA All - Star selections
NBA All - Star Game head coaches
NBA All - Star Game Most Valuable Player Award
Notes:
See also: List of museums in Central Texas
Cashier's check - Wikipedia
A cashier 's check or cheque is a cheque guaranteed by a bank, drawn on the bank 's own funds and signed by a cashier. Cashier 's checks are treated as guaranteed funds because the bank, rather than the purchaser, is responsible for paying the amount. They are commonly required for real estate and brokerage transactions.
Cashier 's checks deposited into a bank account are usually cleared the next day. The customer can request "next - day availability '' when depositing a cashier 's check in person.
However, cashier's checks are often forged in fraud schemes. The recipient of the check can deposit it, withdraw funds under next - day availability, and assume the check is good. In fact, the check can take weeks to clear the banks, so the bank may discover that a check is fraudulent weeks after the customer has withdrawn the funds, and the customer is then legally liable for the cash already withdrawn.
A customer asks a bank for a cashier 's check, and the bank debits the amount from the customer 's account immediately, and assumes the responsibility for covering the cashier 's check. That is in contrast with a personal check, in which the bank does not debit the amount from the customer 's account until the check is deposited or cashed by the recipient.
A cashier 's check is not the same as a teller 's check, also known as a banker 's draft, which is a check provided to a customer of a bank or acquired from a bank for remittance purposes and drawn by the bank, and drawn on another bank or payable through or at a bank. A cashier 's check is also different from a certified check, which is a personal check written by the customer and drawn on the customer 's account, on which the bank certifies that the signature is genuine and that the customer has sufficient funds in the account to cover the check. Also, it should not be confused with a counter check, which is a non-personalized check provided by the bank for the convenience of a customer in making withdrawals or payments but is not guaranteed and is functionally equivalent to a personal check.
Cashier 's checks feature the name of the issuing bank in a prominent location, usually the upper left - hand corner or upper centre of the check. In addition, they are generally produced with enhanced security features, including watermarks, security thread, color - shifting ink, and special bond paper. These are designed to decrease the vulnerability to counterfeit items. To be recognized as a cashier 's check, words to that effect must be included in a prominent place on the front of the item.
The payee 's name, the written and numeric amount to be tendered, the remitter 's information, and other tracking information (such as the branch of issue), are printed on the front of the check. The check is generally signed by one or two bank employees or officers; however, some banks issue cashier 's checks featuring a facsimile signature of the bank 's chief executive officer or other senior official.
Some banks contract out the maintenance of their cashier 's check accounts and check issuing. One leading contractor is Integrated Payment Systems, which issues cashier 's checks and coordinates redemption of the items for many banks, in addition to issuing money orders and other payment instruments. In theory, checks issued by a financial institution but drawn on another institution, as is often the case with credit unions, are teller 's checks.
Due to an increase in fraudulent activities, starting in 2006 many banks insist upon waiting for a cashier 's check to clear the originating institution before making funds available for withdrawal. Personal checks will thus have the same utility in such transactions.
In the United States, under Article 3 of the Uniform Commercial Code, a cashier 's check is effective as a note of the issuing bank. Also, according to Regulation CC (Reg CC) of the Federal Reserve, cashier 's checks are recognized as "guaranteed funds '' and amounts under $5,000 are not subject to deposit hold, except in the case of new accounts. The length of a hold varies (2 days to 2 weeks) depending on the bank. It is not clear what length of time may pass before a bank can be held responsible for accepting a bad cashier 's check.
In Canada, bank drafts do not carry any different legal weight as opposed to a standard check, but are provided as a service to clients as a payment instrument with guaranteed funds. Drafts (or money orders depending on the issuing institution) usually have better security features than standard checks, and as such are often preferred when the receiver is concerned about receiving fraudulent payment instruments. However, bank drafts can also be subject to counterfeiting, and as such can be held or verified by depositing institutions in accordance with their hold funds policy, prior to providing access to the funds.
The term money order is used non-uniformly in Canada, with some institutions offering both money orders and bank drafts depending on the amount, with others only offering one or the other for any amount. Generally, both bank drafts and money orders are treated the same in regards to guaranteed funds and hold policies.
In many nations, money orders are a popular alternative to cashier's checks and are considered safer than personal bank checks. However, in the United States, they are generally not recognized as "guaranteed funds '' under Reg CC and are limited to a specified maximum amount ($1,000 or less under U.S. law for domestic postal money orders).
Because of US regulatory requirements associated with the Patriot Act and the Bank Secrecy Act due to updated concerns over money laundering, most insurance and brokerage firms will no longer accept money orders as payment for insurance premiums or as deposits into brokerage accounts.
Counterfeit money orders and cashier 's checks have been used in certain scams to steal from those who sell their goods online on sites such as eBay and Craigslist.
The counterfeit cashier 's check scam is a scheme where the victim is sent a cashier 's check or money order for payment on an item for sale on the Internet. When the money order is taken to the bank it may not be detected as counterfeit for 10 business days or more, but the bank will deposit the money into the account and state that it has been "verified '' or is "clear '' in about 24 hours. This gives the victim a false feeling of security that the money order was real, so they proceed with the transaction. When the bank eventually discovers that the money order is counterfeit and reverses the account credit many days later, the customer will usually have already mailed the item. In many cases the "check '' or "money order '' is for more than the amount owed, and the victim is asked to refund the difference in cash.
Lawyers in Singapore - Wikipedia
Lawyers in Singapore are part of a fused profession, meaning that they may act as both a solicitor and as an advocate, although lawyers usually specialize in one of litigation, conveyancing or corporate law.
The number of lawyers in Singapore has declined in the first decade of the 21st century. There were 3,300 lawyers in 2006. Parliament approved changes in 2009 to replace the ' pupillage ' system with structured training, and to make it easier for lawyers to return to practise.
International law firms are generally limited to corporate, finance and banking law.
In 2007, there were 4200 lawyers practising law in Singapore, up from 4000 in 2002.
In July 2009, there were 95 foreign firms with offices in Singapore, and 840 foreign lawyers, up from 576 in 2000. Six international firms were given license to practice local corporate law for the first time in December 2008.
In 2012, there were 5200 lawyers practising in Singapore, according to statistics from the Ministry of Law.
Large firms such as Rajah & Tann Asia and Allen & Gledhill constitute about 20 % of the law industry 's practitioners.
Chinese Checkers - Wikipedia
Chinese checkers (US and Canadian spelling) or Chinese chequers (UK spelling) is a strategy board game of German origin (named "Sternhalma '') which can be played by two, three, four, or six people, playing individually or with partners. The game is a modern and simplified variant of the American game Halma.
The objective is to be first to race all of one 's pieces across the hexagram - shaped board into "home '' -- the corner of the star opposite one 's starting corner -- using single - step moves or moves that jump over other pieces. The remaining players continue the game to establish second -, third -, fourth -, fifth -, and last - place finishers. The rules are simple, so even young children can play.
Despite its name, the game is not a variation of checkers, nor did it originate in China or any part of Asia (whereas the game 象棋 xiangqi, or "Chinese chess '', is from China). The game was invented in Germany in 1892 under the name "Stern - Halma '' as a variation of the older American game Halma. The "Stern '' (German for star) refers to the board 's star shape (in contrast to the square board used in Halma).
The name "Chinese Checkers '' originated in the United States as a marketing scheme by Bill and Jack Pressman in 1928. The Pressman company 's game was originally called "Hop Ching Checkers ''.
The game was introduced to Chinese - speaking regions mostly by the Japanese.
The aim is to race all one 's pieces into the star corner on the opposite side of the board before opponents do the same. The destination corner is called home. Each player has 10 pieces, except in games between two players when 15 are used. (On bigger star boards, 15 or 21 pieces are used.)
In "hop across '', the most popular variation, each player starts with their colored pieces on one of the six points or corners of the star and attempts to race them all home into the opposite corner. Players take turns moving a single piece, either by moving one step in any direction to an adjacent empty space, or by jumping in one or any number of available consecutive hops over other single pieces. A player may not combine hopping with a single - step move -- a move consists of one or the other. There is no capturing in Chinese Checkers, so hopped pieces remain active and in play. Turns proceed clockwise around the board.
In the diagram, Green might move the topmost piece one space diagonally forward as shown. A hop consists of jumping over a single adjacent piece, either one's own or an opponent's, to the empty space directly beyond it in the same line of direction. Red might advance the indicated piece by a chain of three hops in a single move. It is not mandatory to make the maximum number of hops possible; in some instances a player may choose to stop the jumping sequence part way in order to impede the opponent's progress, or to align pieces for planned future moves.
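The step - and - hop rules above can be sketched in code. The following is a minimal illustration (not from the article), assuming a hex board stored as a Python dict that maps axial coordinates `(q, r)` to a piece or `None`; the `DIRECTIONS` list and the `legal_moves` name are illustrative choices, not a standard implementation.

```python
# Sketch of Chinese Checkers move generation on an axial-coordinate hex grid.
# The board is a dict {(q, r): piece or None}; cells absent from the dict
# are off the board. Representation and names are assumptions.

DIRECTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def legal_moves(board, start):
    """Cells a piece at `start` may move to: a single step to an adjacent
    empty cell, or a chain of hops over adjacent single pieces."""
    moves = set()
    # Single-step moves: one cell in any direction, landing cell must be empty.
    for dq, dr in DIRECTIONS:
        step = (start[0] + dq, start[1] + dr)
        if board.get(step, "off") is None:        # on the board and empty
            moves.add(step)
    # Hop chains: jump an adjacent piece into the empty cell directly beyond,
    # then repeat from each landing cell. Hopped pieces are never captured,
    # and steps cannot be combined with hops in the same move.
    frontier, seen = [start], {start}
    while frontier:
        cell = frontier.pop()
        for dq, dr in DIRECTIONS:
            over = (cell[0] + dq, cell[1] + dr)
            land = (cell[0] + 2 * dq, cell[1] + 2 * dr)
            if board.get(over) is not None and board.get(land, "off") is None:
                if land not in seen:
                    seen.add(land)
                    frontier.append(land)
                    moves.add(land)
    return moves
```

Because every intermediate landing cell reached during the search is recorded, the function also reflects the rule that a player may stop a hop chain part way.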
Can be played "all versus all '', or three teams of two. When playing teams, teammates usually sit at opposite corners of the star, with each team member controlling their own colored set of pieces. The first team to advance both sets to their home destination corners is the winner. The remaining players usually continue play to determine second - and third - place finishers, etc.
The four - player game is the same as the game for six players, except that two opposite corners will be unused.
In a three - player game, all players control either one or two sets of pieces each. If one set is used, pieces race across the board into empty, opposite corners. If two sets are used, each player controls two differently colored sets of pieces at opposite corners of the star.
In a two - player game, each player plays one, two, or three sets of pieces. If one set is played, the pieces usually go into the opponent 's starting corner, and the number of pieces per side is increased to 15 (instead of the usual 10). If two sets are played, the pieces can either go into the opponent 's starting corners, or one of the players ' two sets can go into an opposite empty corner. If three sets are played, the pieces usually go into the opponent 's starting corners.
A basic strategy is to create or find the longest hopping path that leads closest to home, or immediately into it. (Multiple - jump moves are obviously faster to advance pieces than step - by - step moves.) Since either player can make use of any hopping ' ladder ' or ' chain ' created, a more advanced strategy involves hindering an opposing player in addition to helping oneself make jumps across the board. Of equal importance are the players ' strategies for emptying and filling their starting and home corners. Games between top players are rarely decided by more than a couple of moves.
Differing numbers of players result in different starting layouts, in turn imposing different best - game strategies. For example, if a player 's home destination corner starts empty (i.e. is not an opponent 's starting corner), the player can freely build a ' ladder ' or ' bridge ' with their pieces between the two opposite ends. But if a player 's opponent occupies the home corner, the player may need to wait for opponent pieces to clear before filling the home vacancies.
While the standard rules allow hopping over only a single adjacent occupied position at a time (as in checkers), this version of the game allows pieces to catapult over multiple adjacent occupied positions in a line when hopping.
In the fast - paced or Super Chinese Checkers variant popular in France, a piece may hop over a non-adjacent piece. A hop consists of jumping over a distant piece (friendly or enemy) to a symmetrical position on the opposite side, in the same line of direction. (For example, if there are two empty positions between the jumping piece and the piece being jumped, the jumping piece lands leaving exactly two empty positions immediately beyond the jumped piece.) As in the standard rules, a jumping move may consist of any number of a chain of hops. (When making a chain of hops, a piece is usually allowed to enter an empty corner, as long as it hops out again before the move is completed.)
Jumping over two or more pieces in a hop is not allowed. Therefore, in this variant even more than in the standard version, it is sometimes strategically important to keep one 's pieces bunched in order to prevent a long opposing hop.
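The symmetric long - hop test described above can be expressed compactly. This sketch (an illustration, not from the article) reuses the same assumed board representation, a dict mapping axial coordinates `(q, r)` to a piece or `None`; the function name `super_hop` is invented.

```python
# Sketch of the Super Chinese Checkers hop: the first piece found along a
# direction is the pivot, and the hop lands symmetrically beyond it, with
# every cell between start and pivot, and between pivot and landing, empty.
# Board representation and names are assumptions.

def super_hop(board, start, direction):
    """Return the landing cell for one long hop from `start` along
    `direction`, or None if the hop is not legal."""
    dq, dr = direction
    gap = 0                                   # empty cells before the pivot
    cell = (start[0] + dq, start[1] + dr)
    while board.get(cell, "off") is None:     # walk over empty on-board cells
        gap += 1
        cell = (cell[0] + dq, cell[1] + dr)
    if cell not in board:                     # ran off the board: no pivot
        return None
    # Mirror: `gap` empty cells beyond the pivot, then an empty landing cell.
    land = cell
    for _ in range(gap + 1):
        land = (land[0] + dq, land[1] + dr)
        if board.get(land, "off") is not None:
            return None                       # blocked or off the board
    return land
```

With `gap == 0` this reduces to the standard adjacent hop, so one routine covers both rule sets; hopping over two or more pieces at once is rejected because the walk stops at the first occupied cell and every mirrored cell must be empty.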
An alternative variant allows hops over any symmetrical arrangement, including pairs of pieces, pieces separated by empty positions, and so on.
In the capture variant, all sixty game pieces start out in the hexagonal field in the center of the gameboard. The center position is left unoccupied, so pieces form a symmetric hexagonal pattern. Color is irrelevant in this variant, so players take turns hopping any game piece over any other eligible game piece (s) on the board. The hopped - over pieces are captured (retired from the game, as in English draughts) and collected in the capturing player 's bin. Only jumping moves are allowed; the game ends when no further jumps are possible. The player with the most captured pieces is the winner.
The board is tightly packed at the start of the game; as more pieces are captured, the board frees up, often allowing multiple captures to take place in a single move.
Two or more players can compete in this variant, but if there are more than six players, not everyone will get a fair turn.
This variant resembles the game Leap Frog. The main difference being that in Leap Frog the board is a square board.
Diamond game is a variant of Chinese Checkers played in South Korea and Japan. It uses the same jump rule as in Chinese Checkers. The aim of the game is to enter all one 's pieces into the star corner on the opposite side of the board, before opponents do the same. Each player has ten or fifteen pieces. Ten - piece diamond uses a smaller gameboard than Chinese Checkers, with 73 spaces. Fifteen - piece diamond uses the same board as in Chinese Checkers, with 121 spaces. To play diamond each player selects one color and places their 10 or 15 pieces on a triangle. Two to six players can compete.
Bibliography
Grey matter - Wikipedia
Grey matter (or gray matter) is a major component of the central nervous system, consisting of neuronal cell bodies, neuropil (dendrites and myelinated as well as unmyelinated axons), glial cells (astrocytes and oligodendrocytes), synapses, and capillaries. Grey matter is distinguished from white matter, in that it contains numerous cell bodies and relatively few myelinated axons, while white matter contains relatively few cell bodies and is composed chiefly of long - range myelinated axon tracts. The colour difference arises mainly from the whiteness of myelin. In living tissue, grey matter actually has a very light grey colour with yellowish or pinkish hues, which come from capillary blood vessels and neuronal cell bodies.
Grey matter refers to unmyelinated neurons and other cells of the central nervous system. It is present in the brain, brainstem and cerebellum, and present throughout the spinal cord.
Grey matter is distributed at the surface of the cerebral hemispheres (cerebral cortex) and of the cerebellum (cerebellar cortex), as well as in the depths of the cerebrum (thalamus; hypothalamus; subthalamus; basal ganglia -- putamen, globus pallidus, nucleus accumbens; septal nuclei), of the cerebellum (deep cerebellar nuclei -- dentate nucleus, globose nucleus, emboliform nucleus, fastigial nucleus), and of the brainstem (substantia nigra, red nucleus, olivary nuclei, cranial nerve nuclei).
Grey matter in the spinal cord is known as the grey column which travels down the spinal cord distributed in three grey columns that are presented in an "H '' shape. The forward - facing column is the anterior grey column, the rear - facing one is the posterior grey column and the interlinking one is the lateral grey column. The grey matter on the left and right side is connected by the grey commissure. The grey matter in the spinal cord consists of interneurons, as well as the cell bodies of projection neurons.
Diagram of a spinal vertebra. The grey matter is in the central part of the spinal cord.
Cross-section of spinal cord with grey matter labelled.
Grey matter undergoes development and growth throughout childhood and adolescence. Recent studies using cross-sectional neuroimaging have shown that by around the age of 8 the volume of grey matter begins to decrease. However, the density of grey matter appears to increase as a child develops into early adulthood. Males tend to exhibit grey matter of increased volume but lower density than that of females.
Grey matter contains most of the brain's neuronal cell bodies. It includes regions of the brain involved in muscle control; sensory perception such as seeing and hearing; and memory, emotions, speech, decision making, and self-control.
The grey matter in the spinal cord is split into three grey columns: the anterior, posterior, and lateral grey columns.
The grey matter of the spinal cord can be divided into different layers, called Rexed laminae. These describe, in general, the purpose of the cells within the grey matter of the spinal cord at a particular location.
Interneurons present in the grey matter of the spinal cord
The Rexed laminae group the grey matter in the spinal cord according to its function.
High alcohol consumption has been correlated with significant reductions in grey matter volume. Short-term cannabis use (30 days) is not correlated with changes in white or grey matter. However, several cross-sectional studies have shown that repeated long-term cannabis use is associated with smaller grey matter volumes in the hippocampus, amygdala, medial temporal cortex, and prefrontal cortex, with increased grey matter volume in the cerebellum. Long-term cannabis use also alters white matter integrity in an age-dependent manner, with heavy cannabis use during adolescence and early adulthood causing the greatest amount of damage.
Meditation has been shown to change grey matter structure.
Habitual playing of action video games has been reported to promote a reduction of grey matter in the hippocampus while 3D platformer games have been reported to increase grey matter in the hippocampus.
Women and men with equivalent IQ scores have differing proportions of grey to white matter in cortical brain regions associated with intelligence.
Pregnancy produces substantial changes in brain structure, primarily reductions in grey matter volume in regions subserving social cognition. These grey matter reductions endure for at least two years post-pregnancy.
In the current edition of the official Latin nomenclature, Terminologia Anatomica, substantia grisea is used for the English grey matter. The adjective grisea for grey is, however, not attested in classical Latin; it is derived from the French word for grey, gris. Alternative designations such as substantia cana and substantia cinerea are also in use. The adjective cana, attested in classical Latin, can mean grey or greyish white, while the classical Latin cinerea means ash-coloured.
Human brain right dissected lateral view
Schematic representation of the chief ganglionic categories (I to V).
who sings what in the world's come over you | Jack Scott (singer) - wikipedia
Jack Scott (born Giovanni Domenico Scafone, Jr., January 24, 1936, Windsor, Ontario, Canada) is a Canadian American singer and songwriter. He was the first white rock and roll star to come out of Detroit, Michigan. He was inducted into the Canadian Songwriters Hall of Fame in 2011 and has been called "undeniably the greatest Canadian rock and roll singer of all time."
Scott spent his early childhood in Windsor, Ontario (Canada), across the river from Detroit, Michigan (United States). When he was 10, Scott's family moved to Hazel Park, a Detroit suburb. He grew up listening to hillbilly music and was taught to play the guitar by his mother, Laura. As a teenager, he pursued a singing career and recorded as 'Jack Scott.' At the age of 18, he formed the Southern Drifters. After leading the band for three years, he signed to ABC-Paramount Records as a solo artist in 1957.
After recording two good-selling local hits for ABC-Paramount in 1957, he switched to the Carlton record label and had a double-sided national hit in 1958 with "Leroy" (#11) / "My True Love" (#3). The record sold over one million copies, earning Scott his first gold disc. Later in 1958, "With Your Love" (#28) reached the Top 40. In all, six of the 12 songs on his first album became hit singles. On most of these tracks, he was backed up by the vocal group the Chantones.
He served in the United States Army during most of 1959, just after "Goodbye Baby" (#8) made the Top Ten. 1959 also saw him chart with "The Way I Walk" (#35). Most of his Carlton master tapes were believed lost or destroyed until Rollercoaster Records in England released a vinyl EP, "Jack Scott Rocks", and a CD, "The Way I Walk", which were for the most part mastered from the original tapes rather than the disc dubs used for previous reissues.
At the beginning of 1960, Scott again changed record labels, this time to Top Rank Records. He then recorded four Billboard Hot 100 hits -- "What in the World's Come Over You" (#5), "Burning Bridges" (#3) b/w "Oh Little One" (#34), and "It Only Happened Yesterday" (#38). "What in the World's Come Over You" was Scott's second gold disc winner. Scott continued to record and perform during the 1960s and 1970s. His song "You're Just Gettin' Better" reached the country charts in 1974. In May 1977, Scott recorded a Peel session for BBC Radio 1 disc jockey John Peel.
Scott had more US singles (19) in a shorter period of time (41 months) than any other recording artist, with the exception of The Beatles, Elvis Presley, Fats Domino, and Connie Francis. Scott wrote all of his own hits except one: "Burning Bridges."
His legacy ranks him with the top legends of rock and roll. It has been said that "with the exception of Roy Orbison and Elvis Presley, no white rock and roller of the time ever developed a finer voice with a better range than Jack Scott, or cut a more convincing body of work in Rockabilly, Rock and Roll, Country-Soul, Gospel or Blues".
In 2011 he was inducted into the Canadian Songwriters Hall of Fame. More recently, Scott was nominated for the Hit Parade Hall of Fame. He still sings and tours actively and resides in a suburb of Detroit.
when did the queen of england get married | Elizabeth II - wikipedia
Elizabeth II (Elizabeth Alexandra Mary; born 21 April 1926) is Queen of the United Kingdom and the other Commonwealth realms.
Elizabeth was born in London as the first child of the Duke and Duchess of York, later King George VI and Queen Elizabeth, and she was educated privately at home. Her father acceded to the throne on the abdication of his brother King Edward VIII in 1936, from which time she was the heir presumptive. She began to undertake public duties during the Second World War, serving in the Auxiliary Territorial Service. In 1947, she married Philip, Duke of Edinburgh, a former prince of Greece and Denmark, with whom she has four children: Charles, Prince of Wales; Anne, Princess Royal; Andrew, Duke of York; and Edward, Earl of Wessex.
When her father died in February 1952, she became Head of the Commonwealth and queen regnant of seven independent Commonwealth countries: the United Kingdom, Canada, Australia, New Zealand, South Africa, Pakistan, and Ceylon. She has reigned through major constitutional changes, such as devolution in the United Kingdom, Canadian patriation, and the decolonisation of Africa. Between 1956 and 1992, the number of her realms varied as territories gained independence and realms, including South Africa, Pakistan, and Ceylon (renamed Sri Lanka), became republics. Her many historic visits and meetings include a state visit to the Republic of Ireland and visits to or from five popes. Significant events have included her coronation in 1953 and the celebrations of her Silver, Golden, and Diamond Jubilees in 1977, 2002, and 2012 respectively. In 2017, she became the first British monarch to reach a Sapphire Jubilee. She is the longest - lived and longest - reigning British monarch as well as the world 's longest - reigning queen regnant and female head of state, the oldest and longest - reigning current monarch and the longest - serving current head of state.
Elizabeth has occasionally faced republican sentiments and press criticism of the royal family, in particular after the breakdown of her children 's marriages, her annus horribilis in 1992 and the death in 1997 of her former daughter - in - law Diana, Princess of Wales. However, support for the monarchy has consistently been and remains high, as does her personal popularity.
Elizabeth was born at 02:40 (GMT) on 21 April 1926, during the reign of her paternal grandfather, King George V. Her father, the Duke of York (later King George VI), was the second son of the King. Her mother, the Duchess of York (later Queen Elizabeth), was the youngest daughter of the Scottish aristocrat the Earl of Strathmore and Kinghorne. She was delivered by Caesarean section at her maternal grandfather's London house: 17 Bruton Street, Mayfair. She was baptised by the Anglican Archbishop of York, Cosmo Gordon Lang, in the private chapel of Buckingham Palace on 29 May, and named Elizabeth after her mother, Alexandra after George V's mother, who had died six months earlier, and Mary after her paternal grandmother. Called "Lilibet" by her close family, based on what she called herself at first, she was cherished by her grandfather George V, and during his serious illness in 1929 her regular visits were credited in the popular press and by later biographers with raising his spirits and aiding his recovery.
Elizabeth 's only sibling, Princess Margaret, was born in 1930. The two princesses were educated at home under the supervision of their mother and their governess, Marion Crawford. Lessons concentrated on history, language, literature and music. Crawford published a biography of Elizabeth and Margaret 's childhood years entitled The Little Princesses in 1950, much to the dismay of the royal family. The book describes Elizabeth 's love of horses and dogs, her orderliness, and her attitude of responsibility. Others echoed such observations: Winston Churchill described Elizabeth when she was two as "a character. She has an air of authority and reflectiveness astonishing in an infant. '' Her cousin Margaret Rhodes described her as "a jolly little girl, but fundamentally sensible and well - behaved ''.
During her grandfather 's reign, Elizabeth was third in the line of succession to the throne, behind her uncle Edward, Prince of Wales, and her father, the Duke of York. Although her birth generated public interest, she was not expected to become queen, as the Prince of Wales was still young. Many people believed he would marry and have children of his own. When her grandfather died in 1936 and her uncle succeeded as Edward VIII, she became second - in - line to the throne, after her father. Later that year, Edward abdicated, after his proposed marriage to divorced socialite Wallis Simpson provoked a constitutional crisis. Consequently, Elizabeth 's father became king, and she became heir presumptive. If her parents had had a later son, she would have lost her position as first - in - line, as her brother would have been heir apparent and above her in the line of succession.
Elizabeth received private tuition in constitutional history from Henry Marten, Vice-Provost of Eton College, and learned French from a succession of native - speaking governesses. A Girl Guides company, the 1st Buckingham Palace Company, was formed specifically so she could socialise with girls her own age. Later, she was enrolled as a Sea Ranger.
In 1939, Elizabeth 's parents toured Canada and the United States. As in 1927, when her parents had toured Australia and New Zealand, Elizabeth remained in Britain, since her father thought her too young to undertake public tours. Elizabeth "looked tearful '' as her parents departed. They corresponded regularly, and she and her parents made the first royal transatlantic telephone call on 18 May.
In September 1939, Britain entered the Second World War, which lasted until 1945. During the war, many of London's children were evacuated to avoid the frequent aerial bombing. The suggestion by the senior politician Lord Hailsham that the two princesses should be evacuated to Canada was rejected by Elizabeth's mother, who declared, "The children won't go without me. I won't leave without the King. And the King will never leave." Princesses Elizabeth and Margaret stayed at Balmoral Castle, Scotland, until Christmas 1939, when they moved to Sandringham House, Norfolk. From February to May 1940, they lived at Royal Lodge, Windsor, until moving to Windsor Castle, where they lived for most of the next five years. At Windsor, the princesses staged pantomimes at Christmas in aid of the Queen's Wool Fund, which bought yarn to knit into military garments. In 1940, the 14-year-old Elizabeth made her first radio broadcast during the BBC's Children's Hour, addressing other children who had been evacuated from the cities. She stated: "We are trying to do all we can to help our gallant sailors, soldiers and airmen, and we are trying, too, to bear our share of the danger and sadness of war. We know, every one of us, that in the end all will be well."
In 1943, Elizabeth undertook her first solo public appearance on a visit to the Grenadier Guards, of which she had been appointed colonel the previous year. As she approached her 18th birthday, parliament changed the law so she could act as one of five Counsellors of State in the event of her father 's incapacity or absence abroad, such as his visit to Italy in July 1944. In February 1945, she was appointed as an honorary second subaltern in the Auxiliary Territorial Service with the service number of 230873. She trained as a driver and mechanic and was given the rank of honorary junior commander five months later.
At the end of the war in Europe, on Victory in Europe Day, Princesses Elizabeth and Margaret mingled anonymously with the celebratory crowds in the streets of London. Elizabeth later said in a rare interview, "We asked my parents if we could go out and see for ourselves. I remember we were terrified of being recognised... I remember lines of unknown people linking arms and walking down Whitehall, all of us just swept along on a tide of happiness and relief. ''
During the war, plans were drawn up to quell Welsh nationalism by affiliating Elizabeth more closely with Wales. Proposals, such as appointing her Constable of Caernarfon Castle or a patron of Urdd Gobaith Cymru (the Welsh League of Youth), were abandoned for several reasons, including fear of associating Elizabeth with conscientious objectors in the Urdd at a time when Britain was at war. Welsh politicians suggested she be made Princess of Wales on her 18th birthday. The Home Secretary, Herbert Morrison, supported the idea, but the King rejected it because he felt such a title belonged solely to the wife of a Prince of Wales, and the Prince of Wales had always been the heir apparent. In 1946, she was inducted into the Welsh Gorsedd of Bards at the National Eisteddfod of Wales.
Princess Elizabeth went in 1947 on her first overseas tour, accompanying her parents through southern Africa. During the tour, in a broadcast to the British Commonwealth on her 21st birthday, she made the following pledge: "I declare before you all that my whole life, whether it be long or short, shall be devoted to your service and the service of our great imperial family to which we all belong. ''
Elizabeth met her future husband, Prince Philip of Greece and Denmark, in 1934 and 1937. They are second cousins once removed through King Christian IX of Denmark and third cousins through Queen Victoria. After another meeting at the Royal Naval College in Dartmouth in July 1939, Elizabeth -- though only 13 years old -- said she fell in love with Philip, and they began to exchange letters. She was 21 when their engagement was officially announced on 9 July 1947.
The engagement was not without controversy; Philip had no financial standing, was foreign - born (though a British subject who had served in the Royal Navy throughout the Second World War), and had sisters who had married German noblemen with Nazi links. Marion Crawford wrote, "Some of the King 's advisors did not think him good enough for her. He was a prince without a home or kingdom. Some of the papers played long and loud tunes on the string of Philip 's foreign origin. '' Later biographies reported Elizabeth 's mother initially opposed the union, dubbing Philip "The Hun ''. In later life, however, the Queen Mother told biographer Tim Heald that Philip was "an English gentleman ''.
Before the marriage, Philip renounced his Greek and Danish titles, officially converted from Greek Orthodoxy to Anglicanism, and adopted the style Lieutenant Philip Mountbatten, taking the surname of his mother 's British family. Just before the wedding, he was created Duke of Edinburgh and granted the style His Royal Highness.
Elizabeth and Philip were married on 20 November 1947 at Westminster Abbey. They received 2,500 wedding gifts from around the world. Because Britain had not yet completely recovered from the devastation of the war, Elizabeth required ration coupons to buy the material for her gown, which was designed by Norman Hartnell. In post-war Britain, it was not acceptable for the Duke of Edinburgh 's German relations, including his three surviving sisters, to be invited to the wedding. The Duke of Windsor, formerly King Edward VIII, was not invited either.
Elizabeth gave birth to her first child, Prince Charles, on 14 November 1948. One month earlier, the King had issued letters patent allowing her children to use the style and title of a royal prince or princess, to which they otherwise would not have been entitled as their father was no longer a royal prince. A second child, Princess Anne, was born in 1950.
Following their wedding, the couple leased Windlesham Moor, near Windsor Castle, until July 1949, when they took up residence at Clarence House in London. At various times between 1949 and 1951, the Duke of Edinburgh was stationed in the British Crown Colony of Malta as a serving Royal Navy officer. He and Elizabeth lived intermittently in Malta for several months at a time in the hamlet of Gwardamanġa, at Villa Guardamangia, the rented home of Philip 's uncle, Lord Mountbatten. The children remained in Britain.
During 1951, George VI 's health declined, and Elizabeth frequently stood in for him at public events. When she toured Canada and visited President Harry S. Truman in Washington, D.C., in October 1951, her private secretary, Martin Charteris, carried a draft accession declaration in case the King died while she was on tour. In early 1952, Elizabeth and Philip set out for a tour of Australia and New Zealand by way of Kenya. On 6 February 1952, they had just returned to their Kenyan home, Sagana Lodge, after a night spent at Treetops Hotel, when word arrived of the death of the King and consequently Elizabeth 's immediate accession to the throne. Philip broke the news to the new queen. Martin Charteris asked her to choose a regnal name; she chose to remain Elizabeth, "of course ''. She was proclaimed queen throughout her realms and the royal party hastily returned to the United Kingdom. She and the Duke of Edinburgh moved into Buckingham Palace.
With Elizabeth 's accession, it seemed probable the royal house would bear her husband 's name, becoming the House of Mountbatten, in line with the custom of a wife taking her husband 's surname on marriage. The British Prime Minister, Winston Churchill, and Elizabeth 's grandmother, Queen Mary, favoured the retention of the House of Windsor, and so on 9 April 1952 Elizabeth issued a declaration that Windsor would continue to be the name of the royal house. The Duke complained, "I am the only man in the country not allowed to give his name to his own children. '' In 1960, after the death of Queen Mary in 1953 and the resignation of Churchill in 1955, the surname Mountbatten - Windsor was adopted for Philip and Elizabeth 's male - line descendants who do not carry royal titles.
Amid preparations for the coronation, Princess Margaret told her sister she wished to marry Peter Townsend, a divorcé 16 years Margaret's senior with two sons from his previous marriage. The Queen asked them to wait for a year; in the words of Martin Charteris, "the Queen was naturally sympathetic towards the Princess, but I think she thought -- she hoped -- given time, the affair would peter out." Senior politicians were against the match and the Church of England did not permit remarriage after divorce. If Margaret had contracted a civil marriage, she would have been expected to renounce her right of succession. Eventually, she decided to abandon her plans with Townsend. In 1960, she married Antony Armstrong-Jones, who was created Earl of Snowdon the following year. They divorced in 1978; she did not remarry.
Despite the death of Queen Mary on 24 March, the coronation on 2 June 1953 went ahead as planned, as Mary had asked before she died. The ceremony in Westminster Abbey, with the exception of the anointing and communion, was televised for the first time. Elizabeth 's coronation gown was embroidered on her instructions with the floral emblems of Commonwealth countries: English Tudor rose; Scots thistle; Welsh leek; Irish shamrock; Australian wattle; Canadian maple leaf; New Zealand silver fern; South African protea; lotus flowers for India and Ceylon; and Pakistan 's wheat, cotton, and jute.
From Elizabeth 's birth onwards, the British Empire continued its transformation into the Commonwealth of Nations. By the time of her accession in 1952, her role as head of multiple independent states was already established. In 1953, the Queen and her husband embarked on a seven - month round - the - world tour, visiting 13 countries and covering more than 40,000 miles by land, sea and air. She became the first reigning monarch of Australia and New Zealand to visit those nations. During the tour, crowds were immense; three - quarters of the population of Australia were estimated to have seen her. Throughout her reign, the Queen has made hundreds of state visits to other countries and tours of the Commonwealth; she is the most widely travelled head of state.
In 1956, the British and French prime ministers, Sir Anthony Eden and Guy Mollet, discussed the possibility of France joining the Commonwealth. The proposal was never accepted and the following year France signed the Treaty of Rome, which established the European Economic Community, the precursor to the European Union. In November 1956, Britain and France invaded Egypt in an ultimately unsuccessful attempt to capture the Suez Canal. Lord Mountbatten claimed the Queen was opposed to the invasion, though Eden denied it. Eden resigned two months later.
The absence of a formal mechanism within the Conservative Party for choosing a leader meant that, following Eden 's resignation, it fell to the Queen to decide whom to commission to form a government. Eden recommended she consult Lord Salisbury, the Lord President of the Council. Lord Salisbury and Lord Kilmuir, the Lord Chancellor, consulted the British Cabinet, Winston Churchill, and the Chairman of the backbench 1922 Committee, resulting in the Queen appointing their recommended candidate: Harold Macmillan.
The Suez crisis and the choice of Eden 's successor led in 1957 to the first major personal criticism of the Queen. In a magazine, which he owned and edited, Lord Altrincham accused her of being "out of touch ''. Altrincham was denounced by public figures and slapped by a member of the public appalled by his comments. Six years later, in 1963, Macmillan resigned and advised the Queen to appoint the Earl of Home as prime minister, advice she followed. The Queen again came under criticism for appointing the prime minister on the advice of a small number of ministers or a single minister. In 1965, the Conservatives adopted a formal mechanism for electing a leader, thus relieving her of involvement.
In 1957, she made a state visit to the United States, where she addressed the United Nations General Assembly on behalf of the Commonwealth. On the same tour, she opened the 23rd Canadian Parliament, becoming the first monarch of Canada to open a parliamentary session. Two years later, solely in her capacity as Queen of Canada, she revisited the United States and toured Canada. In 1961, she toured Cyprus, India, Pakistan, Nepal, and Iran. On a visit to Ghana the same year, she dismissed fears for her safety, even though her host, President Kwame Nkrumah, who had replaced her as head of state, was a target for assassins. Harold Macmillan wrote, "The Queen has been absolutely determined all through... She is impatient of the attitude towards her to treat her as... a film star... She has indeed ' the heart and stomach of a man '... She loves her duty and means to be a Queen. '' Before her tour through parts of Quebec in 1964, the press reported extremists within the Quebec separatist movement were plotting Elizabeth 's assassination. No attempt was made, but a riot did break out while she was in Montreal; the Queen 's "calmness and courage in the face of the violence '' was noted.
Elizabeth 's pregnancies with Princes Andrew and Edward, in 1959 and 1963, mark the only times she has not performed the State Opening of the British parliament during her reign. In addition to performing traditional ceremonies, she also instituted new practices. Her first royal walkabout, meeting ordinary members of the public, took place during a tour of Australia and New Zealand in 1970.
The 1960s and 1970s saw an acceleration in the decolonisation of Africa and the Caribbean. Over 20 countries gained independence from Britain as part of a planned transition to self - government. In 1965, however, the Rhodesian Prime Minister, Ian Smith, in opposition to moves towards majority rule, declared unilateral independence from Britain while still expressing "loyalty and devotion '' to Elizabeth. Although the Queen dismissed him in a formal declaration, and the international community applied sanctions against Rhodesia, his regime survived for over a decade. As Britain 's ties to its former empire weakened, the British government sought entry to the European Community, a goal it achieved in 1973.
In February 1974, the British Prime Minister, Edward Heath, advised the Queen to call a general election in the middle of her tour of the Austronesian Pacific Rim, requiring her to fly back to Britain. The election resulted in a hung parliament; Heath 's Conservatives were not the largest party, but could stay in office if they formed a coalition with the Liberals. Heath only resigned when discussions on forming a coalition foundered, after which the Queen asked the Leader of the Opposition, Labour 's Harold Wilson, to form a government.
A year later, at the height of the 1975 Australian constitutional crisis, the Australian Prime Minister, Gough Whitlam, was dismissed from his post by Governor - General Sir John Kerr, after the Opposition - controlled Senate rejected Whitlam 's budget proposals. As Whitlam had a majority in the House of Representatives, Speaker Gordon Scholes appealed to the Queen to reverse Kerr 's decision. She declined, saying she would not interfere in decisions reserved by the Constitution of Australia for the governor - general. The crisis fuelled Australian republicanism.
In 1977, Elizabeth marked the Silver Jubilee of her accession. Parties and events took place throughout the Commonwealth, many coinciding with her associated national and Commonwealth tours. The celebrations re-affirmed the Queen 's popularity, despite virtually coincident negative press coverage of Princess Margaret 's separation from her husband. In 1978, the Queen endured a state visit to the United Kingdom by Romania 's communist leader, Nicolae Ceaușescu, and his wife, Elena, though privately she thought they had "blood on their hands ''. The following year brought two blows: one was the unmasking of Anthony Blunt, former Surveyor of the Queen 's Pictures, as a communist spy; the other was the assassination of her relative and in - law Lord Mountbatten by the Provisional Irish Republican Army.
According to Paul Martin, Sr., by the end of the 1970s the Queen was worried the Crown "had little meaning for '' Pierre Trudeau, the Canadian Prime Minister. Tony Benn said the Queen found Trudeau "rather disappointing ''. Trudeau 's supposed republicanism seemed to be confirmed by his antics, such as sliding down banisters at Buckingham Palace and pirouetting behind the Queen 's back in 1977, and the removal of various Canadian royal symbols during his term of office. In 1980, Canadian politicians sent to London to discuss the patriation of the Canadian constitution found the Queen "better informed... than any of the British politicians or bureaucrats ''. She was particularly interested after the failure of Bill C - 60, which would have affected her role as head of state. Patriation removed the role of the British parliament from the Canadian constitution, but the monarchy was retained. Trudeau said in his memoirs that the Queen favoured his attempt to reform the constitution and that he was impressed by "the grace she displayed in public '' and "the wisdom she showed in private ''.
During the 1981 Trooping the Colour ceremony, six weeks before the wedding of Prince Charles and Lady Diana Spencer, six shots were fired at the Queen from close range as she rode down The Mall on her horse, Burmese. Police later discovered the shots were blanks. The 17 - year - old assailant, Marcus Sarjeant, was sentenced to five years in prison and released after three. The Queen 's composure and skill in controlling her mount were widely praised.
Months later, in October, the Queen was the subject of another attack while on a visit to Dunedin, New Zealand. New Zealand Security Intelligence Service documents declassified in 2018 revealed that 17-year-old Christopher John Lewis had fired a shot with a .22 rifle from the fifth floor of a building overlooking the parade, but missed. Lewis was arrested but never charged with attempted murder or treason, and was sentenced to three years in jail for unlawful possession and discharge of a firearm. Two years into his sentence, he attempted to escape from a psychiatric hospital in order to assassinate Prince Charles, who was visiting the country with Diana, Princess of Wales, and Prince William.
From April to September 1982, the Queen was anxious but proud of her son, Prince Andrew, who was serving with British forces during the Falklands War. On 9 July, the Queen awoke in her bedroom at Buckingham Palace to find an intruder, Michael Fagan, in the room with her. In a serious lapse of security, assistance only arrived after two calls to the Palace police switchboard. After hosting US President Ronald Reagan at Windsor Castle in 1982 and visiting his California ranch in 1983, the Queen was angered when his administration ordered the invasion of Grenada, one of her Caribbean realms, without informing her.
Intense media interest in the opinions and private lives of the royal family during the 1980s led to a series of sensational stories in the press, not all of which were entirely true. As Kelvin MacKenzie, editor of The Sun, told his staff: "Give me a Sunday for Monday splash on the Royals. Don't worry if it's not true -- so long as there's not too much of a fuss about it afterwards." Newspaper editor Donald Trelford wrote in The Observer of 21 September 1986: "The royal soap opera has now reached such a pitch of public interest that the boundary between fact and fiction has been lost sight of... it is not just that some papers don't check their facts or accept denials: they don't care if the stories are true or not." It was reported, most notably in The Sunday Times of 20 July 1986, that the Queen was worried that Margaret Thatcher's economic policies fostered social divisions, and that she was alarmed by high unemployment, a series of riots, the violence of a miners' strike, and Thatcher's refusal to apply sanctions against the apartheid regime in South Africa. The sources of the rumours included the royal aide Michael Shea and Commonwealth Secretary-General Shridath Ramphal, but Shea claimed his remarks were taken out of context and embellished by speculation. Thatcher reputedly said the Queen would vote for the Social Democratic Party -- Thatcher's political opponents. Thatcher's biographer John Campbell claimed "the report was a piece of journalistic mischief-making". Belying reports of acrimony between them, Thatcher later conveyed her personal admiration for the Queen, and the Queen gave two honours in her personal gift -- membership in the Order of Merit and the Order of the Garter -- to Thatcher after her replacement as prime minister by John Major. Former Canadian Prime Minister Brian Mulroney said Elizabeth was a "behind the scenes force" in ending apartheid.
By the end of the 1980s, the Queen had become the target of satire. The involvement of younger members of the royal family in the charity game show It 's a Royal Knockout in 1987 was ridiculed. In Canada, Elizabeth publicly supported politically divisive constitutional amendments, prompting criticism from opponents of the proposed changes, including Pierre Trudeau. The same year, the elected Fijian government was deposed in a military coup. As monarch of Fiji, Elizabeth supported the attempts of the Governor - General, Ratu Sir Penaia Ganilau, to assert executive power and negotiate a settlement. Coup leader Sitiveni Rabuka deposed Ganilau and declared Fiji a republic.
In 1991, in the wake of coalition victory in the Gulf War, the Queen became the first British monarch to address a joint meeting of the United States Congress.
In a speech on 24 November 1992, to mark the 40th anniversary of her accession, Elizabeth called 1992 her annus horribilis, meaning horrible year. Republican feeling in Britain had risen because of press estimates of the Queen 's private wealth -- which were contradicted by the Palace -- and reports of affairs and strained marriages among her extended family. In March, her second son, Prince Andrew, and his wife, Sarah, separated; in April, her daughter, Princess Anne, divorced Captain Mark Phillips; during a state visit to Germany in October, angry demonstrators in Dresden threw eggs at her; and, in November, a large fire broke out at Windsor Castle, one of her official residences. The monarchy came under increased criticism and public scrutiny. In an unusually personal speech, the Queen said that any institution must expect criticism, but suggested it be done with "a touch of humour, gentleness and understanding ''. Two days later, the Prime Minister, John Major, announced reforms to the royal finances planned since the previous year, including the Queen paying income tax from 1993 onwards, and a reduction in the civil list. In December, Prince Charles and his wife, Diana, formally separated. The year ended with a lawsuit as the Queen sued The Sun newspaper for breach of copyright when it published the text of her annual Christmas message two days before it was broadcast. The newspaper was forced to pay her legal fees and donated £ 200,000 to charity.
In the years to follow, public revelations on the state of Charles and Diana 's marriage continued. Even though support for republicanism in Britain seemed higher than at any time in living memory, republicanism was still a minority viewpoint, and the Queen herself had high approval ratings. Criticism was focused on the institution of the monarchy itself and the Queen 's wider family rather than her own behaviour and actions. In consultation with her husband and the Prime Minister, John Major, as well as the Archbishop of Canterbury, George Carey, and her private secretary, Robert Fellowes, she wrote to Charles and Diana at the end of December 1995, saying a divorce was desirable.
In August 1997, a year after the divorce, Diana was killed in a car crash in Paris. The Queen was on holiday with her extended family at Balmoral. Diana 's two sons by Charles -- Princes William and Harry -- wanted to attend church and so the Queen and Prince Philip took them that morning. After that single public appearance, for five days the Queen and the Duke shielded their grandsons from the intense press interest by keeping them at Balmoral where they could grieve in private, but the royal family 's seclusion and the failure to fly a flag at half - mast over Buckingham Palace caused public dismay. Pressured by the hostile reaction, the Queen agreed to return to London and do a live television broadcast on 5 September, the day before Diana 's funeral. In the broadcast, she expressed admiration for Diana and her feelings "as a grandmother '' for the two princes. As a result, much of the public hostility evaporated.
In November 1997, the Queen and her husband held a reception at Banqueting House to mark their golden wedding anniversary. She made a speech and praised Philip for his role as a consort, referring to him as "my strength and stay ''.
In 2002, Elizabeth marked her Golden Jubilee. Her sister and mother died in February and March respectively, and the media speculated whether the Jubilee would be a success or a failure. She again undertook an extensive tour of her realms, which began in Jamaica in February, where she called the farewell banquet "memorable '' after a power cut plunged the King 's House, the official residence of the governor - general, into darkness. As in 1977, there were street parties and commemorative events, and monuments were named to honour the occasion. A million people attended each day of the three - day main Jubilee celebration in London, and the enthusiasm shown by the public for the Queen was greater than many journalists had expected.
Though generally healthy throughout her life, in 2003 she had keyhole surgery on both knees. In October 2006, she missed the opening of the new Emirates Stadium because of a strained back muscle that had been troubling her since the summer.
In May 2007, The Daily Telegraph, citing unnamed sources, reported the Queen was "exasperated and frustrated '' by the policies of the British Prime Minister, Tony Blair, that she was concerned the British Armed Forces were overstretched in Iraq and Afghanistan, and that she had raised concerns over rural and countryside issues with Blair. She was, however, said to admire Blair 's efforts to achieve peace in Northern Ireland. She became the first British monarch to celebrate a diamond wedding anniversary in November 2007. On 20 March 2008, at the Church of Ireland St Patrick 's Cathedral, Armagh, the Queen attended the first Maundy service held outside England and Wales.
The Queen addressed the United Nations for a second time in 2010, again in her capacity as Queen of all Commonwealth realms and Head of the Commonwealth. The UN Secretary General, Ban Ki - moon, introduced her as "an anchor for our age ''. During her visit to New York, which followed a tour of Canada, she officially opened a memorial garden for British victims of the September 11 attacks. The Queen 's visit to Australia in October 2011 -- her sixteenth visit since 1954 -- was called her "farewell tour '' in the press because of her age. By invitation of the Irish President, Mary McAleese, the Queen made the first state visit to the Republic of Ireland by a British monarch in May 2011.
Elizabeth 's Diamond Jubilee in 2012 marked 60 years on the throne, and celebrations were held throughout her realms, the wider Commonwealth, and beyond. In a message released on Accession Day, Elizabeth wrote:
In this special year, as I dedicate myself anew to your service, I hope we will all be reminded of the power of togetherness and the convening strength of family, friendship and good neighbourliness... I hope also that this Jubilee year will be a time to give thanks for the great advances that have been made since 1952 and to look forward to the future with clear head and warm heart.
She and her husband undertook an extensive tour of the United Kingdom, while her children and grandchildren embarked on royal tours of other Commonwealth states on her behalf. On 4 June, Jubilee beacons were lit around the world. In November, the Queen and her husband celebrated their sapphire wedding anniversary. On 18 December, she became the first British sovereign to attend a peacetime Cabinet meeting since George III in 1781.
The Queen, who opened the 1976 Summer Olympics in Montreal, also opened the 2012 Summer Olympics and Paralympics in London, making her the first head of state to open two Olympic Games in two countries. For the London Olympics, she played herself in a short film as part of the opening ceremony, alongside Daniel Craig as James Bond. On 4 April 2013, she received an honorary BAFTA for her patronage of the film industry and was called "the most memorable Bond girl yet '' at the award ceremony.
On 3 March 2013, Elizabeth was admitted to King Edward VII 's Hospital as a precaution after developing symptoms of gastroenteritis. She returned to Buckingham Palace the following day. A week later, she signed the new Commonwealth charter. Because of her age and the need for her to limit travelling, in 2013 she chose not to attend the biennial meeting of Commonwealth heads of government for the first time in 40 years. She was represented at the summit in Sri Lanka by her son, Prince Charles. She had cataract surgery in May 2018.
The Queen surpassed her great - great - grandmother, Queen Victoria, to become the longest - lived British monarch on 21 December 2007, and the longest - reigning British monarch and longest - reigning queen regnant and female head of state in the world on 9 September 2015. She is also the "longest - reigning sovereign in Canada 's modern era ''. (King Louis XIV of France reigned over Canada (New France) for longer than Elizabeth.) She became the oldest current monarch after King Abdullah of Saudi Arabia died on 23 January 2015. She later became the longest - reigning current monarch and the longest - serving current head of state following the death of King Bhumibol of Thailand on 13 October 2016, and the oldest current head of state on the resignation of Robert Mugabe on 21 November 2017. On 6 February 2017, she became the first British monarch to commemorate a Sapphire Jubilee, and on 20 November, she was the first British monarch to celebrate a platinum wedding anniversary. Prince Philip had retired from his official duties as the Queen 's consort in August.
The Queen does not intend to abdicate, though Prince Charles is expected to take on more of her duties as Elizabeth, who celebrated her 92nd birthday in 2018, carries out fewer public engagements. On 20 April 2018, the government leaders of the Commonwealth of Nations announced that she will be succeeded by Prince Charles as head of the Commonwealth. The Queen stated it was her "sincere wish '' that the Prince of Wales would follow her in the role. Plans for her death and funeral have been extensively prepared by most British government and media organisations for decades.
Since Elizabeth rarely gives interviews, little is known of her personal feelings. As a constitutional monarch, she has not expressed her own political opinions in a public forum. She does have a deep sense of religious and civic duty, and takes her coronation oath seriously. Aside from her official religious role as Supreme Governor of the established Church of England, she is a member of that church and also of the national Church of Scotland. She has demonstrated support for inter-faith relations and has met with leaders of other churches and religions, including five popes: Pius XII, John XXIII, John Paul II, Benedict XVI and Francis. A personal note about her faith often features in her annual Christmas message broadcast to the Commonwealth. In 2000, she said:
To many of us, our beliefs are of fundamental importance. For me the teachings of Christ and my own personal accountability before God provide a framework in which I try to lead my life. I, like so many of you, have drawn great comfort in difficult times from Christ 's words and example.
She is patron of over 600 organisations and charities. Her main leisure interests include equestrianism and dogs, especially her Pembroke Welsh Corgis. Her lifelong love of corgis began in 1933 with Dookie, the first corgi owned by her family. Scenes of a relaxed, informal home life have occasionally been witnessed; she and her family, from time to time, prepare a meal together and do the washing up afterwards.
In the 1950s, as a young woman at the start of her reign, Elizabeth was depicted as a glamorous "fairytale Queen ''. After the trauma of the Second World War, it was a time of hope, a period of progress and achievement heralding a "new Elizabethan age ''. Lord Altrincham 's accusation in 1957 that her speeches sounded like those of a "priggish schoolgirl '' was an extremely rare criticism. In the late 1960s, attempts to portray a more modern image of the monarchy were made in the television documentary Royal Family and by televising Prince Charles 's investiture as Prince of Wales. In public, she took to wearing mostly solid - colour overcoats and decorative hats, which allow her to be seen easily in a crowd.
At her Silver Jubilee in 1977, the crowds and celebrations were genuinely enthusiastic, but in the 1980s, public criticism of the royal family increased, as the personal and working lives of Elizabeth 's children came under media scrutiny. Elizabeth 's popularity sank to a low point in the 1990s. Under pressure from public opinion, she began to pay income tax for the first time, and Buckingham Palace was opened to the public. Discontent with the monarchy reached its peak on the death of Diana, Princess of Wales, though Elizabeth 's personal popularity and support for the monarchy rebounded after her live television broadcast to the world five days after Diana 's death.
In November 1999, a referendum in Australia on the future of the Australian monarchy favoured its retention in preference to an indirectly elected head of state. Polls in Britain in 2006 and 2007 revealed strong support for Elizabeth, and in 2012, her Diamond Jubilee year, approval ratings hit 90 percent. Referendums in Tuvalu in 2008 and Saint Vincent and the Grenadines in 2009 both rejected proposals to become republics.
Elizabeth has been portrayed in a variety of media by many notable artists, including painters Pietro Annigoni, Peter Blake, Chinwe Chukwuogo - Roy, Terence Cuneo, Lucian Freud, Rolf Harris, Damien Hirst, Juliet Pannett, and Tai - Shan Schierenberg. Notable photographers of Elizabeth have included Cecil Beaton, Yousuf Karsh, Annie Leibovitz, Lord Lichfield, Terry O'Neill, John Swannell, and Dorothy Wilding. The first official portrait of Elizabeth was taken by Marcus Adams in 1926.
Elizabeth 's personal fortune has been the subject of speculation for many years. In 1971 Jock Colville, her former private secretary and a director of her bank, Coutts, estimated her wealth at £ 2 million (equivalent to about £ 26 million in 2016). In 1993, Buckingham Palace called estimates of £ 100 million "grossly overstated ''. In 2002, she inherited an estate worth an estimated £ 70 million from her mother. The Sunday Times Rich List 2017 estimated her personal wealth at £ 360 million, making her the 329th richest person in the UK.
The Royal Collection, which includes thousands of historic works of art and the Crown Jewels, is not owned by the Queen personally but is held in trust, as are her official residences, such as Buckingham Palace and Windsor Castle, and the Duchy of Lancaster, a property portfolio valued at £ 472 million in 2015. Sandringham House and Balmoral Castle are personally owned by the Queen. The British Crown Estate -- with holdings of £ 12 billion in 2016 -- is held in trust and can not be sold or owned by Elizabeth in a personal capacity.
Elizabeth has held many titles and honorary military positions throughout the Commonwealth, is Sovereign of many orders in her own countries, and has received honours and awards from around the world. In each of her realms she has a distinct title that follows a similar formula: Queen of Jamaica and her other realms and territories in Jamaica, Queen of Australia and her other realms and territories in Australia, etc. In the Channel Islands and Isle of Man, which are Crown dependencies rather than separate realms, she is known as Duke of Normandy and Lord of Mann, respectively. Additional styles include Defender of the Faith and Duke of Lancaster. When in conversation with the Queen, the practice is to initially address her as Your Majesty and thereafter as Ma'am.
From 21 April 1944 until her accession, Elizabeth 's arms consisted of a lozenge bearing the royal coat of arms of the United Kingdom differenced with a label of three points argent, the centre point bearing a Tudor rose and the first and third a cross of St George. Upon her accession, she inherited the various arms her father held as sovereign. The Queen also possesses royal standards and personal flags for use in the United Kingdom, Canada, Australia, New Zealand, Jamaica, Barbados, and elsewhere.
Jim Beam - Wikipedia
Jim Beam is a brand of bourbon whiskey produced in Clermont, Kentucky, by Beam Suntory, a subsidiary of Suntory Holdings of Osaka, Japan. It is one of the best - selling brands of bourbon in the world. Since 1795 (interrupted by Prohibition), seven generations of the Beam family have been involved in whiskey production for the company that produces the brand, which was given the name "Jim Beam '' in 1933 in honor of James B. Beam, who rebuilt the business after Prohibition ended. Previously produced by the Beam family and later owned by the Fortune Brands holding company, the brand was purchased by Suntory Holdings in 2014.
During the late 18th century, members of the Böhm family, who eventually changed the spelling of their surname to "Beam '', emigrated from Germany and settled in Kentucky.
Johannes "Reginald '' Beam (1760 -- 1834) was a farmer who began producing whiskey in the style that became known as bourbon. Jacob Beam sold his first barrels of corn whiskey around 1795. The whiskey was first called Old Jake Beam Sour Mash, and the distillery was known as Old Tub.
David Beam (1802-1854) took on his father's responsibilities in 1820 at the age of 18, expanding distribution of the family's bourbon during a time of industrial revolution. In 1854, David M. Beam (1833-1913) moved the distillery to Nelson County to capitalize on the growing network of railroad lines connecting states. James Beauregard Beam (1864-1947) managed the family business before and after Prohibition, rebuilding the distillery in 1933 in Clermont, Kentucky, near his Bardstown home. The James B. Beam Distilling Company was founded in 1935 by Harry L. Homel, Oliver Jacobson, H. Blum and Jeremiah Beam. From this point forward, the bourbon was called "Jim Beam Bourbon" after James Beauregard Beam, and some of the bottle labels bear the statement "None Genuine Without My Signature" with the signature James B. Beam. T. Jeremiah Beam (1899-1977) started working at the Clear Springs distillery in 1913, later becoming the master distiller and overseeing operations at the new Clermont facility. Jeremiah Beam eventually gained full ownership and opened a second distillery near Boston, Kentucky, in 1954.
Booker Noe (1929 -- 2004), birth name Frederick Booker Noe II, grandson of Jim Beam, was the Master Distiller at the Jim Beam Distillery for more than 40 years, working closely with Master Distiller Jerry Dalton (1998 -- 2007). In 1987 Booker introduced his own namesake bourbon, Booker 's, the company 's first uncut, straight - from - the - barrel bourbon, and the first of the company 's "Small Batch Bourbon Collection ''.
Fred Noe (1957 -- present), birth name Frederick Booker Noe III, became the seventh generation Beam family distiller in 2007 and regularly travels for promotional purposes.
The Beam family has also played a major role in the history of the Heaven Hill Distillery. All of the Master Distillers at Heaven Hill since its founding have been members of the Beam family. The original Master Distiller at Heaven Hill was Joseph L. Beam, Jim Beam 's first cousin. He was followed by his son, Harry, who was followed by Earl Beam, the son of Jim Beam 's brother, Park. Earl Beam was then succeeded by the current Heaven Hill Master Distillers, Parker Beam and his son, Craig Beam.
In 1987, Jim Beam purchased National Brands, acquiring brands including Old Crow, Bourbon de Luxe, Old Taylor, Old Grand - Dad, and Sunny Brook. Old Taylor was subsequently sold to the Sazerac Company.
On August 4, 2003, a fire destroyed a Jim Beam aging warehouse in Bardstown, Kentucky. It held about 19,000 barrels of bourbon. Flames rose more than 100 feet from the burning structure. Burning bourbon spilled from the warehouse and set a nearby creek on fire. An estimated 19,000 fish died from the bourbon that contaminated the creek and a river.
For some period of time, Jim Beam was part of the holding company formerly known as Fortune Brands that was dismantled in 2011. Other parts of the remaining company were spun off as an IPO on the NYSE on the same day, as Fortune Brands Home & Security, and the liquor division of the holding company was renamed Beam, Inc. on October 4, 2011. In January 2014, it was announced that Beam Inc. would be purchased by Suntory Holdings Ltd., a Japanese group of brewers & distillers known for producing Japan 's first whiskey. The combined company is known as Beam Suntory.
In the history of the brand now known as Jim Beam, there have been seven generations of distillers from the Beam (and Noe) family. Retired Master Distiller Jerry Dalton (1998 -- 2007) was the first non-Beam to be Master Distiller at the company, and his successor was a member of the family.
Several varieties bearing the Jim Beam name are available.
All are 70 proof (35 % ABV)
Several of these offerings have performed quite well at international spirits ratings competitions. For example, Jim Beam 's Black label was awarded a gold medal at the 2012 San Francisco World Spirits Competition. Jim Beam Black also won a Gold Outstanding medal at the 2013 International Wine and Spirit Competition.
The Beam "Small Batch Bourbon Collection '' consists of several bourbons where the Beam name appears on the labels and marketing materials but is less prominent.
Bourbon whiskey distillers must follow government standards for production. By law (27 C.F.R. 5), any "straight '' bourbon must be: produced in the United States; made of a grain mix of at least 51 % corn; distilled at no higher than 160 proof (80 % ABV); free of any additives (except water to reduce proof for aging and bottling); aged in new, charred white oak barrels; entered into the aging barrels at no higher than 125 proof (62.5 % ABV), aged for a minimum of 2 years, and bottled at no less than 80 proof (40 % ABV).
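The numeric limits above all use US proof, which is simply twice the percentage of alcohol by volume. As a minimal illustrative sketch (the function names and example figures are my own, not from any regulation or from Beam), the quoted 27 C.F.R. 5 limits can be expressed as a simple check:

```python
# Sketch: the "straight bourbon" limits quoted in the text (27 C.F.R. 5).
# In the US, proof is twice the alcohol-by-volume percentage.

def proof_to_abv(proof: float) -> float:
    """Convert US proof to percent alcohol by volume."""
    return proof / 2.0

def is_legal_straight_bourbon(corn_pct: float, distill_proof: float,
                              barrel_entry_proof: float, bottle_proof: float,
                              years_aged: float, has_additives: bool) -> bool:
    """Return True if a batch meets every limit listed in the text."""
    return (corn_pct >= 51                 # grain mix at least 51% corn
            and distill_proof <= 160       # distilled at no more than 160 proof (80% ABV)
            and barrel_entry_proof <= 125  # entered the barrel at no more than 125 proof
            and bottle_proof >= 80         # bottled at no less than 80 proof (40% ABV)
            and years_aged >= 2            # aged a minimum of 2 years
            and not has_additives)         # water added to reduce proof is not an "additive"

# The 125-proof "low wine" mentioned later in the text works out to 62.5% ABV:
print(proof_to_abv(125))  # -> 62.5

# A hypothetical batch: 77% corn, second distillation to 135 proof,
# barreled at 125 proof, bottled at 86 proof, aged 4 years, no additives.
print(is_legal_straight_bourbon(77, 135, 125, 86, 4, False))  # -> True
```

Note that the grain percentages and proofs in the example batch are illustrative, not Jim Beam's actual recipe.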
Jim Beam starts with water filtered naturally by the limestone shelf found in Central Kentucky. A strain of yeast used since the end of Prohibition is added to a tank with the grains to create what is known as "dona yeast '', used later in the fermentation process. Hammer mills grind the mix of corn, rye and barley malt to break it down for easier cooking. The mix is then moved into a large mash cooker where water and set back are added. The "set back '' is a portion of the old mash from the previous distillation -- the key step of the sour mash process, ensuring consistency from batch to batch.
From the cooker, the mash heads to the fermenter, where it is cooled to 60-70 °F and yeast is added again. The yeast feeds on the sugars in the mash, producing heat, carbon dioxide and alcohol. Called "distiller's beer" or "wash", the resulting liquid (after filtering to remove solids) looks, smells and tastes like (and essentially is) a form of beer. The wash is pumped into a column still, where it is heated to over 200 °F, causing the alcohol to vaporize. As the vapor cools, it condenses into a liquid called "low wine", which measures 125 proof (62.5% alcohol). A second distillation in a pot still heats and condenses the liquid into "high wine", which reaches 135 proof (67.5% alcohol).
The high wine is moved to new, charred American oak barrels, each of which holds about 53 gallons of liquid. A "bung" is used to seal each barrel before it is moved to a nearby hilltop rackhouse, where it will age for up to nine years. As the seasons change, natural weather variations expand and contract the barrel wood, allowing bourbon to seep into the wood, where the caramelized sugars from the charred oak flavor and color the bourbon. A significant portion (known as the "angel's share") of the 53 gallons of bourbon escapes the barrel through evaporation or stays trapped in the wood. Jim Beam ages for at least four years, twice as long as the government requires for a "straight" bourbon. At the end of the aging period, the amber liquid is filtered, bottled, packaged and sent to one of many distributors around the world via the three-tier distribution system.
On July 26, 2004, THANASI Foods announced the release of Jim Beam Soaked Sunflower Seeds, a snack product soaked in Jim Beam and available in 3 flavors; Original, Barbeque, and Jalapeño. On October 18, 2004, the company announced the addition of Jim Beam Soaked Beef Jerky to the range. Jim Beam has a licensing agreement with Vita Food Products to manufacture and sell Jim Beam BBQ Sauces, Marinades, Mustards, Steak Sauces, Hot Sauce, Wing Sauce, Pancake Syrup and Glazes. Vita Specialty Foods also produces a range of Jim Beam hot smoked and fresh, marinated salmon. Top Shelf Gourmet specializes in Jim Beam bourbon - infused fresh pork and poultry products, including Jim Beam Bourbon Barrel Ham, Pulled Pork, and Pulled Chicken.
Brandmark Products produces a full range of Jim Beam branded billiard and home recreation products. Zippo produces a range of Jim Beam branded pocket and multi-purpose lighters. Bradley Smoker produces a line of Jim Beam branded smokers and smoking briquettes made from Jim Beam Barrels. Silver Buffalo designs Jim Beam wall art, dartboards and accessories for home recreational use. Concept One develops Jim Beam headwear. Headline Entertainment develops Jim Beam T - shirts and outerwear. Sherwood Brands produces a full line of Jim Beam gift sets.
Outside the United States, Beam Global Spirits & Wine has had a sales and distribution alliance with The Edrington Group since 2009.
Paul Telfer (actor) - Wikipedia
Paul Telfer (born 30 October 1979) is a Scottish actor who has lived and worked in both his native United Kingdom and the United States. He portrayed Xander Kiriakis on the NBC soap opera Days of Our Lives.
Telfer appeared in episodes of two Sky One series: Is Harry on the Boat? (2002) (as Matt) and Mile High (2003) (as Rory).
He also appeared in a series of ancient history and mythological epics: as Gannicus in the 2004 TV movie Spartacus, Hephaestion in the 2007 movie Young Alexander the Great and the title role in the 2005 TV miniseries Hercules.
In 2007, Telfer appeared in five episodes of the second series of the BBC drama Hotel Babylon (as Luke), and in three episodes of the TV series NCIS as Marine Corporal Damon Werth: "Corporal Punishment", "Outlaws and In-Laws" and "Jack Knife".
Telfer has lived in New Zealand in recent years and has co-written theatre work. Away from acting, he posed naked in the women's magazine Cosmopolitan in support of the testicular and prostate cancer charity Everyman.
He appeared in the 2011 movie Son of Morning.
In 2012, he appeared as Alexander in three episodes of the fourth season of The Vampire Diaries, beginning with the episode "The Five".
In late January 2015, Telfer played a small role as Victor Kiriakis 's hit man, Damon, before being cast in the contract role of Victor 's nephew Xander Kiriakis on Days of Our Lives. He debuted on 24 March 2015 and departed on 25 August 2015.
He appeared as "Lord Macintosh '' in the episodes "The Bear and the Bow '' and "The Bear King '' on the ABC television series Once Upon a Time broadcast November 2015.
Telfer graduated with First Class honours in Film Studies from the University of Kent at Canterbury in 1999. He is married to Broadway actress Carmen Cusack.
Flower war - Wikipedia
A flower war or flowery war (Nahuatl: xōchiyāōyōtl, Spanish: guerra florida) was a ritual war fought intermittently between the Aztec Triple Alliance and its enemies from the "mid-1450s to the arrival of the Spaniards in 1519". Enemies included the city-states of Tlaxcala, Huejotzingo, and Cholula in the Tlaxcala-Pueblan Valley in central Mexico. In these wars, participants fought according to a set of conventions.
Texcocan nobleman Ixtlilxochitl gives the "fullest early statement concerning the origin as well as the initial rationale" of the flower war. From 1450 to 1454, the Aztecs had suffered from crop failure and severe drought; this led to famine and many deaths in the central Mexican highlands. Ixtlilxochitl reports that the flower war began "as a response" to the famine: "the priests... of Mexico (Tenochtitlan) said that the gods were angry at the empire, and that to placate them it was necessary to sacrifice many men, and that this had to be done regularly." Thus, Tenochtitlan (the Aztec capital), Texcoco, Tlaxcala, Cholula, and Huejotzingo agreed to engage in flower war for the purpose of obtaining human sacrifices for the gods. However, scholars such as Hicks disagree with using Ixtlilxochitl's writings as the origin story of the flower war, because Ixtlilxochitl does not specifically mention a "flower war" and is the only known source to record these events.
Flower wars differed from typical wars in a number of important aspects. While engaging in a flower war, competing armies would meet on a "preset date at a preselected place. '' These places became sacred sites and were called cuauhtlalli or yaotlalli. Combatants signaled the start of war by burning a large "pyre of paper and incense '' between the armies. Actual battle tactics also differed from typical warfare. In typical warfare, the Aztecs used atlatl darts, stones, and other ranged weapons to weaken enemy forces from afar. However, in flower wars, the Aztecs neglected to use ranged weapons and instead used weapons such as the macuahuitl that required skill and close proximity to the enemy. The use of these kinds of weapons allowed the Aztecs to display their individual combat ability, which was an important part of the flower war.
Flower wars involved fewer soldiers than typical Aztec wars did. A larger proportion of the soldiers would be drawn from nobility than during a typical war. These characteristics allowed the Aztecs to engage in flower wars during any time of the year. In contrast, the Aztecs could fight larger wars of conquest only from late autumn to early spring, because Aztec citizens were needed for farming purposes during the rest of the year. Additionally, flower wars differed from typical wars in that there were equal numbers of soldiers on each side of the battle; this was also related to the Aztecs wanting to show off their military prowess.
Flower wars were generally less lethal than typical wars, but a long - running flower war could become increasingly deadly over time. For example, in a long - running flower war between the Aztecs and the Chalcas, there were few battle deaths at the start. After time had passed, captured commoners started to be killed, but captured nobles were frequently released; sacrifice was not always the fate of captives. However, after further time had passed, captive nobles were killed along with the commoners. This increased the cost of the flower war for both the Aztecs and the Chalcas. Interestingly, the Aztecs considered flower war death to be more noble than dying in a typical war; this can be seen in the word for a flower war death, xochimiquiztli, which translates to "flowery death, blissful death, fortunate death. '' Further, the Aztecs thought that those who died in a flower war would be transported to the heaven where Huitzilopochtli (the supreme god of sun, fire, and war) lived.
There appear to be a variety of reasons that the Aztecs engaged in flower wars. Historians have thought that flower wars were fought for purposes including combat training and capturing humans for religious sacrifice. Historians note evidence of the sacrifice motive: one of Cortés's captains, Andrés de Tapia, once asked Moctezuma II why the stronger Aztec Empire had not yet conquered the nearby state of Tlaxcala outright. The emperor responded that although the Aztecs could have done so if they had wanted to, they had not because war with Tlaxcala was a convenient way of gathering sacrifices and training their own soldiers. However, scholars such as Frederic Hicks question that the main purpose of the flower war was to gain sacrifices. Tlaxcalan historian Muñoz Camargo noted that the Aztecs would often besiege Tlaxcalan towns and cut off trade, which was uncharacteristic of a typical flower war. For this reason, proponents of Hicks's view believe that the Aztecs did want to conquer the Tlaxcalans, but simply could not for some reason.
Despite many scholars ' doubts about the sacrifice motive of the flower war, Hicks asserts that Moctezuma II 's explanations of the flower war (gaining sacrifices and combat training) were logical, given that the Aztecs did place a heavy importance on both sacrifice and martial ability. Fighting in actual warfare was a mandatory part of training for warriors of the noble class, and it was heavily encouraged for warriors of lower classes as well. Given these factors, Hicks suggests that Moctezuma II 's stated reasons may have been genuine and not just an excuse for military failure.
However, some scholars have suggested that the flower war served purposes beyond gaining sacrifices and combat training. For example, Hassig states that for the Aztecs, "flower wars were an efficient means of continuing a conflict that was too costly to conclude immediately". As such, a purpose of these wars was to occupy and wear down the enemy's fighting force. By requiring an equal number of soldiers on each side, the Aztecs made the battle seem balanced at first; however, the side with fewer overall troops suffered more because the losses comprised a greater percentage of their total forces. Through this, the Aztecs used the flower wars to weaken their opponents. Furthermore, since fewer soldiers took part in a flower war as compared to a traditional war, the practice of flower war allowed the Aztecs to hold a potential threat at bay while focusing the bulk of their forces elsewhere.
Another purpose of the flower war, according to Hassig, was to show the superiority of Aztec troops. This was another reason that equal numbers of troops were used. If the Aztecs tried to use numerical superiority, their enemy would resort to the kind of defensive tactics that the Aztecs had trouble fighting against. With equal numbers, the enemy would fight the Aztecs on the open field, where individual soldiers had a greater chance of showing off their martial ability. Finally, according to Hassig, "propaganda was perhaps the most significant purpose of flower wars". By engaging their opponents in the flower war, the Aztecs were able to continuously showcase their force, which warned other city-states about their power. If the Aztecs made enough of a show of force, it could encourage the allies of the Aztecs' enemies to change their allegiance.
who was India's first president when the first nuclear test was conducted | Smiling Buddha - Wikipedia
Smiling Buddha (MEA designation: Pokhran-I) was the assigned code name of India's first successful nuclear bomb test on 18 May 1974. The bomb was detonated on the army base Pokhran Test Range (PTR), in Rajasthan, by the Indian Army under the supervision of several key Indian generals.
Pokhran-I was also the first confirmed nuclear weapons test by a nation outside the five permanent members of the United Nations Security Council. Officially, the Indian Ministry of External Affairs (MEA) claimed the test was a "peaceful nuclear explosion", but it was in fact part of an accelerated nuclear weapons programme.
India started its own nuclear programme in 1944 when Homi Jehangir Bhabha founded the Tata Institute of Fundamental Research. Physicist Raja Ramanna played an essential role in nuclear weapons technology research; he expanded and supervised scientific research on nuclear weapons and was the first directing officer of the small team of scientists that supervised and carried out the test.
After Indian independence from the British Empire, Indian Prime Minister Jawaharlal Nehru authorised the development of a nuclear programme headed by Homi Bhabha. The Atomic Energy Act of 1948 focused on peaceful development. India was heavily involved in the development of the Nuclear Non-Proliferation Treaty, but ultimately opted not to sign it.
We must develop this atomic energy quite apart from war -- indeed I think we must develop it for the purpose of using it for peaceful purposes... Of course, if we are compelled as a nation to use it for other purposes, possibly no pious sentiments of any of us will stop the nation from using it that way.
In 1954, Homi Jehangir Bhabha steered the nuclear programme in the direction of weapons design and production. Two important infrastructure projects were commissioned. The first established the Atomic Energy Establishment at Trombay, near Mumbai (Bombay). The other created a governmental secretariat, the Department of Atomic Energy (DAE), of which Bhabha was the first secretary. From 1954 to 1959, the nuclear programme grew swiftly. By 1958, the DAE had one-third of the defence budget for research purposes. In 1954, India reached a verbal understanding with Canada and the United States under the Atoms for Peace programme; Canada and the United States ultimately agreed to provide and establish the CIRUS research reactor, also at Trombay. The acquisition of CIRUS was a watershed event in nuclear proliferation, with the understanding between India and the United States that the reactor would be used for peaceful purposes only. CIRUS was an ideal facility to develop a plutonium device, and therefore Nehru refused to accept nuclear fuel from Canada and started the programme to develop an indigenous nuclear fuel cycle.
In July 1958, Nehru authorised "Project Phoenix '' to build a reprocessing plant with a capacity of 20 tons of fuel a year -- a size to match the production capacity of CIRUS. The plant used the PUREX process and was designed by the American firm Vitro International. Construction of the plutonium plant began at Trombay on 27 March 1961, and it was commissioned in mid-1964.
The nuclear programme continued to mature, and by 1960, Nehru made the critical decision to move the programme into production. At about the same time, he held discussions with the American firm Westinghouse Electric to construct India's first nuclear power plant at Tarapur, Maharashtra. Kenneth Nichols, a US Army engineer, recalled of a meeting with Nehru that "it was that time when Nehru turned to Bhabha and asked Bhabha for the timeline of the development of a nuclear weapon". Bhabha estimated he would need about a year to accomplish the task.
By 1962, the nuclear programme was still developing, but at a slow rate. Nehru was distracted by the Sino - Indian War, during which India lost territory to China. Nehru turned to the Soviet Union for help, but the Soviet Union was preoccupied with the Cuban Missile Crisis. The Soviet Politburo turned down Nehru 's request for arms and continued backing the Chinese. India concluded that the Soviet Union was an unreliable ally, and this conclusion strengthened India 's determination to create a nuclear deterrent. Design work began in 1965 under Bhabha and proceeded under Raja Ramanna who took over the programme after the former 's death.
Bhabha was now aggressively lobbying for nuclear weapons and made several speeches on Indian radio. In 1964, Bhabha told the Indian public via radio that "such nuclear weapons are remarkably cheap", supporting his argument by citing the cost of the American Plowshare programme of peaceful nuclear explosions. Bhabha told politicians that a 10 kt device would cost around $350,000, and a 2 Mt device around $600,000. From this, he estimated that a stockpile of around 50 atomic bombs would cost under $21 million, and a stockpile of 50 two-megaton hydrogen bombs around $31.5 million. Bhabha did not realise, however, that the U.S. Plowshare cost figures were produced by a vast industrial complex costing tens of billions of dollars, which had already manufactured nuclear weapons numbering in the tens of thousands. The delivery systems for nuclear weapons typically cost several times as much as the weapons themselves.
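As a rough sanity check (simple per-device multiplication, using nothing beyond the figures quoted above), Bhabha's stockpile totals are broadly consistent with his per-device estimates; the small gaps suggest his totals folded in some overhead:

```python
# Bhabha's quoted 1964 per-device cost estimates (USD)
cost_10kt_fission = 350_000   # one 10 kt fission device
cost_2mt_fusion = 600_000     # one 2 Mt hydrogen device
stockpile = 50

fission_total = stockpile * cost_10kt_fission
fusion_total = stockpile * cost_2mt_fusion

print(fission_total)  # 17500000 -- under the quoted "$21 million"
print(fusion_total)   # 30000000 -- close to the quoted $31.5 million
```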
The nuclear programme was partially slowed down when Lal Bahadur Shastri became the prime minister. In 1965, Shastri faced another war with Pakistan. Shastri appointed physicist Vikram Sarabhai as the head of the nuclear programme but because of his Gandhian beliefs Sarabhai directed the programme toward peaceful purposes rather than military development.
In 1967, Indira Gandhi became the prime minister and work on the nuclear programme resumed with renewed vigour. Homi Sethna, a chemical engineer, played a significant role in the development of weapon-grade plutonium, while Ramanna designed and manufactured the whole nuclear device. The first nuclear bomb project did not employ more than 75 scientists because of its sensitivity. The nuclear weapons programme was now directed towards the production of plutonium rather than uranium.
In 1968 -- 69, P.K. Iyengar visited the Soviet Union with three colleagues and toured the nuclear research facilities at Dubna. During his visit, Iyengar was impressed by the plutonium-fueled pulsed fast reactor. Upon his return to India, Iyengar set about developing plutonium reactors, approved by the Indian political leadership in January 1969. The secret plutonium plant was known as Purnima, and construction began in March 1969. The plant's leadership included Iyengar, Ramanna, Homi Sethna, and Sarabhai. Sarabhai's presence indicates that, with or without formal approval, work on nuclear weapons at Trombay had commenced.
India continued to harbour ambivalent feelings about nuclear weapons, and accorded low priority to their production until the Indo-Pakistani War of 1971. In December 1971, Richard Nixon sent a carrier battle group led by the USS Enterprise (CVN-65) into the Bay of Bengal in an attempt to intimidate India. The Soviet Union responded by sending a submarine armed with nuclear missiles from Vladivostok to trail the US task force. The Soviet response demonstrated the deterrent value and significance of nuclear weapons and ballistic missile submarines to Indira Gandhi. India gained the military and political initiative over Pakistan after acceding to the treaty that divided Pakistan into two different political entities.
On 7 September 1972, near the peak of her post-war popularity, Indira Gandhi authorised the Bhabha Atomic Research Centre (BARC) to manufacture a nuclear device and prepare it for a test. Although the Indian Army was not fully involved in the nuclear testing, the army 's highest command was kept fully informed of the test preparations. The preparations were carried out under the watchful eyes of the Indian political leadership, with civilian scientists assisting the Indian Army.
The device was formally called the "Peaceful Nuclear Explosive", but it was usually referred to as the Smiling Buddha. The device was detonated on 18 May 1974, Buddha Jayanti (a festival day in India marking the birth of Gautama Buddha). Indira Gandhi maintained tight control of all aspects of the preparations for the test, which was conducted in extreme secrecy; besides Gandhi, only advisers Parmeshwar Haksar and Durga Dhar were kept informed. Scholar Raj Chengappa asserts that Defence Minister Jagjivan Ram was given no knowledge of the test and learned of it only after it was conducted. Swaran Singh, the Minister of External Affairs, was given 48 hours' advance notice. The Indira Gandhi administration employed no more than 75 civilian scientists, while General G.G. Bewoor, the Indian army chief, and the commander of the Indian Western Command were the only military commanders kept informed.
The head of this entire nuclear bomb project was the director of the BARC, Raja Ramanna. In later years, his role in the nuclear programme would be more deeply integrated as he remained head of the nuclear programme most of his life. The designer and creator of the bomb was P.K. Iyengar, who was the second in command of this project. Iyengar 's work was further assisted by the chief metallurgist, R. Chidambaram, and by Nagapattinam Sambasiva Venkatesan of the Terminal Ballistics Research Laboratory, who developed and manufactured the high explosive implosion system. The explosive materials and the detonation system were developed by Waman Dattatreya Patwardhan of the High Energy Materials Research Laboratory.
The overall project was supervised by chemical engineer Homi Sethna, Chairman of the Atomic Energy Commission of India. Chidambaram, who would later coordinate work on the Pokhran-II tests, began work on the equation of state of plutonium in late 1967 or early 1968. To preserve secrecy, the project employed no more than 75 scientists and engineers from 1967 -- 74. Abdul Kalam also arrived at the test site as the representative of the DRDO.
The device was of the implosion-type design and had a close resemblance to the American nuclear bomb called Fat Man. The implosion system was assembled at the Terminal Ballistics Research Laboratory (TBRL) of the DRDO in Chandigarh. The detonation system was developed at the High Energy Materials Research Laboratory (HEMRL) of the DRDO in Pune, Maharashtra. The 6 kg of plutonium came from the CIRUS reactor at BARC. The neutron initiator was of the polonium -- beryllium type and was code-named Flower. The complete nuclear bomb was engineered and assembled by Indian engineers at Trombay before transportation to the test site.
The fully assembled device had a hexagonal cross-section, 1.25 metres in diameter, and weighed 1,400 kg. The device was mounted on a hexagonal metal tripod and transported to the shaft on rails, which the army kept covered with sand. The device was detonated at 8:05 a.m. when Dastidar pushed the firing button; it was emplaced in a shaft 107 m deep under the army's Pokhran test range in the Thar Desert (or Great Indian Desert), Rajasthan.
The nuclear yield of the test remains controversial, with unclear data provided by Indian sources, although Indian politicians have given the country's press a range from 2 kt to 20 kt. The official yield was initially set at 12 kt; post-Operation Shakti claims have raised it to 13 kt. Independent seismic data and analysis of the crater features indicate a lower figure. Analysts usually estimate the yield at 4 to 6 kt, using conventional seismic magnitude-to-yield conversion formulas. In recent years, both Homi Sethna and P.K. Iyengar have conceded that the official yield was an exaggeration.
Iyengar has variously stated that the yield was 8 -- 10 kt, that the device was designed to yield 10 kt, and that the yield was 8 kt "exactly as predicted". Although seismic scaling laws lead to an estimated yield range between 3.2 kt and 21 kt, an analysis of hard rock cratering effects suggests a narrow range of around 8 kt, which is within the uncertainties of the seismic yield estimate.
Indian Prime Minister Indira Gandhi had already gained much popularity and publicity after her successful military campaign against Pakistan in the 1971 war. The test caused an immediate revival of her popularity, which had flagged considerably since its high after that war. The overall popularity and image of the Congress Party was enhanced, and the party was well received in the Indian Parliament. In 1975, Homi Sethna, a chemical engineer and the chairman of the Indian Atomic Energy Commission, Raja Ramanna of BARC, and Basanti Nagchaudhuri of the DRDO were all honoured with the Padma Vibhushan, India's second highest civilian award. Five other project members received the Padma Shri, India's fourth highest civilian award. India consistently maintained that this was a peaceful nuclear test and that it had no intention of militarising its nuclear programme. However, according to independent monitors, the test was part of an accelerated Indian nuclear programme. In 1997, Raja Ramanna, speaking to the Press Trust of India, maintained:
The Pokhran test was a bomb, I can tell you now... An explosion is an explosion, a gun is a gun, whether you shoot at someone or shoot at the ground... I just want to make clear that the test was not all that peaceful.
While India continued to state that the test was for peaceful purposes, it encountered opposition from many quarters. The Nuclear Suppliers Group (NSG) was formed in reaction to the Indian tests to check international nuclear proliferation. The NSG decided in 1992 to require full - scope IAEA safeguards for any new nuclear export deals, which effectively ruled out nuclear exports to India, but in 2008 it waived this restriction on nuclear trade with India as part of the Indo - US civilian nuclear agreement.
Pakistan did not view the test as a "peaceful nuclear explosion", and cancelled talks scheduled for 10 June on the normalisation of relations. Pakistan's Prime Minister Zulfikar Ali Bhutto vowed in June 1974 that he would never succumb to "nuclear blackmail" or accept "Indian hegemony or domination over the subcontinent". The chairman of the Pakistan Atomic Energy Commission, Munir Ahmed Khan, said that the test would force Pakistan to test its own nuclear bomb. Pakistan's leading nuclear physicist, Pervez Hoodbhoy, stated in 2011 that he believed the test "pushed (Pakistan) further into the nuclear arena".
The plutonium used in the test was created in the CIRUS reactor supplied by Canada and using heavy water supplied by the United States. Both countries reacted negatively, especially in light of then ongoing negotiations on the Nuclear Non-Proliferation Treaty and the economic aid both countries had provided to India. Canada concluded that the test violated a 1971 understanding between the two states, and froze nuclear energy assistance for the two heavy water reactors then under construction. The United States concluded that the test did not violate any agreement and proceeded with a June 1974 shipment of enriched uranium for the Tarapur reactor.
France sent a congratulatory telegram to India but later withdrew it.
Despite many proposals, India did not carry out further nuclear tests until 1998. After the 1998 general elections, Operation Shakti (also known as Pokhran-II) was carried out at the Pokhran test site, using devices designed and built over the preceding two decades.
where did the pilgrims migrate to before they came to north america | Pilgrims (Plymouth Colony) - wikipedia
The Pilgrims or Pilgrim Fathers were early European settlers of the Plymouth Colony in present-day Plymouth, Massachusetts, United States. The Pilgrims' leadership came from the religious congregations of Brownist separatist Puritans who had fled the volatile political environment in England for the relative calm and tolerance of 17th-century Holland in the Netherlands. They held Puritan Calvinist religious beliefs but, unlike other Puritans, they maintained that their congregations needed to be separated from the English state church. They were also concerned that they might lose their English cultural identity if they remained in the Netherlands, so they arranged with English investors to establish a new colony in North America. The colony was established in 1620 and became the second successful English settlement in North America (after the founding of Jamestown, Virginia, in 1607). The Pilgrims' story became a central theme of the history and culture of the United States.
The core of the group that came to be known as the Pilgrims were brought together between 1586 and 1605 by shared theological beliefs, as expressed by Richard Clyfton, a Brownist parson at All Saints ' Parish Church in Babworth, near East Retford, Nottinghamshire. This congregation held Puritan beliefs comparable to other non-conforming movements (i.e., groups not in communion with the Church of England) led by Robert Browne, John Greenwood, and Henry Barrowe. As Separatists, they also held that their differences with the Church of England were irreconcilable and that their worship should be independent of the trappings, traditions, and organization of a central church -- unlike those Puritans who maintained their membership in and allegiance to the Church of England.
William Brewster was a former diplomatic assistant to the Netherlands. He was living in the Scrooby manor house while serving as postmaster for the village and bailiff to the Archbishop of York. He had been impressed by Clyfton 's services and had begun participating in services led by John Smyth in Gainsborough, Lincolnshire.
The Puritan Separatists had long been controversial. Under the 1559 Act of Uniformity, it was illegal not to attend official Church of England services, with a fine of one shilling (£0.05; about £17 today) for each missed Sunday and holy day. The penalties for conducting unofficial services included imprisonment and larger fines. Under the policy of this time, Barrowe and Greenwood were executed for sedition in 1593.
During much of Brewster 's tenure (1595 -- 1606), the Archbishop was Matthew Hutton. He displayed some sympathy to the Puritan cause, writing to Robert Cecil, Secretary of State to James I in 1604:
The Puritans though they differ in Ceremonies and accidentes, yet they agree with us in substance of religion, and I thinke all or the moste parte of them love his Majestie, and the presente state, and I hope will yield to conformitie. But the Papistes are opposite and contrarie in very many substantiall pointes of religion, and can not but wishe the Popes authoritie and popish religion to be established.
Many Puritans had hoped that a reconciliation would be possible when James came to power which would allow them independence, but the Hampton Court Conference of 1604 denied substantially all the concessions which they had requested -- except for an English translation of the Bible. Following the Conference in 1605, Clyfton was declared a non-conformist and stripped of his position at Babworth. Brewster invited him to live at his home.
Archbishop Hutton died in 1606 and Tobias Matthew was appointed as his replacement. He was one of James ' chief supporters at the 1604 conference, and he promptly began a campaign to purge the archdiocese of non-conforming influences, both Puritans and those wishing to return to the Catholic faith. Disobedient clergy were replaced, and prominent Separatists were confronted, fined, and imprisoned. He is credited with driving recusants out of the country, those who refused to attend Anglican services.
At about the same time, Brewster arranged for a congregation to meet privately at the Scrooby manor house. Services were held beginning in 1606, with Clyfton as pastor, John Robinson as teacher, and Brewster as the presiding elder. Shortly after, Smyth and members of the Gainsborough group moved on to Amsterdam. Brewster is known to have been fined £20 (about £3,960 today) in absentia for his non-compliance with the church. This followed his September 1607 resignation from the postmaster position, about the time that the congregation had decided to follow the Smyth party to Amsterdam.
Scrooby member William Bradford of Austerfield kept a journal of the congregation 's events that later was published as Of Plymouth Plantation. Of this time, he wrote:
But after these things they could not long continue in any peaceable condition, but were hunted & persecuted on every side, so as their former afflictions were but as flea - bitings in comparison of these which now came upon them. For some were taken & clapt up in prison, others had their houses besett & watcht night and day, & hardly escaped their hands; and ye most were faine to flie & leave their howses & habitations, and the means of their livelehood.
The Pilgrims moved to the Netherlands in about 1607. They lived in Leiden, Holland, a city of 100,000 inhabitants, residing in small houses behind the "Kloksteeg" opposite the Pieterskerk. The success of the congregation in Leiden was mixed. Leiden was a thriving industrial center, and many members were well able to support themselves working at Leiden University or in the textile, printing, and brewing trades. Others were less able to bring in sufficient income, hampered by their rural backgrounds and the language barrier; for those, accommodations were made on an estate bought by Robinson and three partners.
Bradford wrote of their years in Leiden:
For these & other reasons they removed to Leyden, a fair & bewtifull citie, and of a sweete situation, but made more famous by ye universitie wherwith it is adorned, in which of late had been so many learned man. But wanting that traffike by sea which Amerstdam injoyes, it was not so beneficiall for their outward means of living & estats. But being now hear pitchet they fell to such trads & imployments as they best could; valewing peace & their spirituall comforte above any other riches whatsoever. And at length they came to raise a competente & comforteable living, but with hard and continuall labor.
William Brewster had been teaching English at the university, and Robinson enrolled in 1615 to pursue his doctorate. There he participated in a series of debates, particularly regarding the contentious issue of Calvinism versus Arminianism (siding with the Calvinists against the Remonstrants). Brewster acquired typesetting equipment about 1616 in a venture financed by Thomas Brewer, and began publishing the debates through a local press.
The Netherlands, however, was a land whose culture and language were strange and difficult for the English congregation to understand or learn. They found the Dutch morals much too libertine, and their children were becoming more and more Dutch as the years passed. The congregation came to believe that they faced eventual extinction if they remained there.
By 1617, the congregation was stable and relatively secure, but there were ongoing issues that needed to be resolved. Bradford noted that many members of the congregation were showing signs of early aging, compounding the difficulties which some had in supporting themselves. A few had spent their savings and so gave up and returned to England. It was feared that more would follow and that the congregation would become unsustainable. The employment issues made it unattractive for others to come to Leiden, and younger members had begun leaving to find employment and adventure elsewhere. Also compelling was the possibility of missionary work, an opportunity that rarely arose in a Protestant stronghold.
Reasons for departure are suggested by Bradford when he notes the "discouragements" of the hard life which they had in the Netherlands; the hope of attracting others by finding "a better, and easier place of living"; the children of the group being "drawn away by evil examples into extravagance and dangerous courses"; and the "great hope, for the propagating and advancing the gospel of the kingdom of Christ in those remote parts of the world."
Edward Winslow 's list was similar. In addition to the economic worries and missionary possibilities, he stressed that it was important for the people to retain their English identity, culture, and language. They also believed that the English Church in Leiden could do little to benefit the larger community there.
At the same time, there were many uncertainties about moving to such a place as America. Stories had come back from there about failed colonies. There were fears that the native people would be violent, that there would be no source of food or water, that exposure to unknown diseases was possible, and that travel by sea was always hazardous. Balancing all this was a local political situation that was in danger of becoming unstable. The truce was faltering in what came to be known as the Eighty Years ' War, and there was fear over what the attitudes of Spain might be toward them.
Candidate destinations included Guiana, where the Dutch had already established Essequibo, or somewhere near the existing Virginia settlements. Virginia was an attractive destination because the presence of the older colony might offer better security and trade opportunities. It was thought, however, that they should not settle too near, since that might too closely duplicate the political environment back in England. The London Company administered a territory of considerable size in the region. The intended settlement location was at the mouth of the Hudson River. This made it possible to settle at a distance which allayed concerns of social, political, and religious conflicts, but still provided the military and economic benefits of relative closeness to an established colony.
Robert Cushman and John Carver were sent to England to solicit a land patent. Their negotiations were delayed because of conflicts internal to the London Company, but ultimately a patent was secured in the name of John Wincob on June 9 (Old Style) / June 19 (New Style), 1619. The charter was granted with the king 's condition that the Leiden group 's religion would not receive official recognition.
Preparations stalled because of the continued problems within the London Company. Competing Dutch companies approached the congregation and discussed with them the possibility of settling in the Hudson River area.
David Baeckelandt suggests that the Leiden group was approached by Englishman Matthew Slade, son-in-law of Petrus Plancius, a cartographer for the Dutch East India Company. Slade was also a spy for the English ambassador. The Puritans' plans were therefore known both at court and among influential investors in the Virginia Company's colony at Jamestown. Negotiations with the Dutch were broken off at the encouragement of English merchant Thomas Weston, who assured them that he could resolve the London Company delays. The London Company intended to claim the area explored by Hudson before the Dutch could become fully established. The first Dutch settlers did not arrive in the area until 1624.
Weston did return with a substantial change, telling the Leiden group that parties in England had obtained a land grant north of the existing Virginia territory, to be called New England. This was only partially true; the new grant did come to pass, but not until late in 1620, when the Plymouth Council for New England received its charter. It was expected that the area could be fished profitably, and it was not under the control of the existing Virginia government.
A second change was known only to parties in England who chose not to inform the larger group. New investors had been brought into the venture who wanted the terms altered so that, at the end of the seven-year contract, half of the settled land and property would revert to the investors. Also, there had been a provision allowing each settler two days per week to work on personal business, but this provision had been dropped from the agreement without the knowledge of the Puritans.
Amid these negotiations, William Brewster found himself involved with religious unrest emerging in Scotland. In 1618, King James had promulgated the Five Articles of Perth which were seen in Scotland as an attempt to encroach on their Presbyterian tradition. Pamphlets critical of this law were published by Brewster and smuggled into Scotland by April 1619. These pamphlets were traced back to Leiden, and a failed attempt to apprehend Brewster was made in July when his presence in England became known.
Also in July in Leiden, English ambassador Dudley Carleton became aware of the situation and began leaning on the Dutch government to extradite Brewster. An arrest was made in September, but only Thomas Brewer the financier was in custody. Brewster 's whereabouts remain unknown between then and the colonists ' departure. Brewster 's type was seized. After several months of delay, Brewer was sent to England for questioning, where he stonewalled government officials until well into 1620. One resulting concession that England did obtain from the Netherlands was a restriction on the press, making such publications illegal to produce.
Thomas Brewer was ultimately convicted in England for his continued religious publication activities and sentenced in 1626 to a fourteen-year prison term.
Not all of the congregation were able to depart on the first trip. Many members were not able to settle their affairs within the time constraints, and the budget was limited for travel and supplies. It was decided that the initial settlement should be undertaken primarily by younger and stronger members. The remainder agreed to follow if and when they could.
Robinson would remain in Leiden with the larger portion of the congregation, and Brewster was to lead the American congregation. The church in America would be run independently, but it was agreed that membership would automatically be granted in either congregation to members who moved between the continents.
With personal and business matters agreed upon, supplies and a small ship were procured. Speedwell was to bring some passengers from the Netherlands to England, then on to America where it would be kept for the fishing business, with a crew hired for support services during the first year. The larger ship Mayflower was leased for transport and exploration services.
The Speedwell was originally named Swiftsure. It was built in 1577 at sixty tons, and was part of the English fleet that defeated the Spanish Armada. It departed Delfshaven in July 1620 with the Leiden colonists, after a canal ride from Leiden of about seven hours. It reached Southampton, Hampshire and met with the Mayflower and the additional colonists hired by the investors. With final arrangements made, the two vessels set out on August 5 (Old Style) / August 15 (New Style).
Soon thereafter, the Speedwell crew reported that their ship was taking in water, so both were diverted to Dartmouth, Devon. There it was inspected for leaks and sealed, but a second attempt to depart also failed, bringing them only as far as Plymouth, Devon. It was decided that Speedwell was untrustworthy, and it was sold; the ship 's master and some of the crew transferred to the Mayflower for the trip. William Bradford observed that the Speedwell seemed "overmasted '', thus putting a strain on the hull; and he attributed her leaking to crew members who had deliberately caused it, allowing them to abandon their year - long commitments. Passenger Robert Cushman wrote that the leaking was caused by a loose board.
Of the 120 combined passengers, 102 were chosen to travel on the Mayflower with the supplies consolidated. Of these, about half had come by way of Leiden, and about 28 of the adults were members of the congregation. The reduced party finally sailed successfully on September 6 (Old Style) / September 16 (New Style), 1620.
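The paired dates above reflect the gap between the Julian (Old Style) calendar still used in England in 1620 and the Gregorian (New Style) calendar. As an illustrative aside not in the original article, the conversion for 17th - century dates is a fixed 10 - day shift; the sketch below uses Python's proleptic - Gregorian `date` type merely as a container for a Julian date:

```python
from datetime import date, timedelta

# During the 1600s the Gregorian (New Style) calendar ran exactly
# 10 days ahead of the Julian (Old Style) calendar used in England.
def old_to_new_style(d: date) -> date:
    return d + timedelta(days=10)

# Speedwell and Mayflower first set out: August 5 (OS) -> August 15 (NS)
print(old_to_new_style(date(1620, 8, 5)))   # 1620-08-15
# Final, successful departure: September 6 (OS) -> September 16 (NS)
print(old_to_new_style(date(1620, 9, 6)))   # 1620-09-16
```

The offset grew to 11 days by the time England adopted the Gregorian calendar in 1752, so this fixed shift is valid only for the 1600s.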
Initially the trip went smoothly, but under way they were met with strong winds and storms. One of these caused a main beam to crack, and the possibility was considered of turning back, even though they were more than halfway to their destination. However, they repaired the ship sufficiently to continue using a "great iron screw '' brought along by the colonists (probably a jack to be used for either house construction or a cider press). Passenger John Howland was washed overboard in the storm but caught a top - sail halyard trailing in the water and was pulled back on board.
One crew member and one passenger died before they reached land. A child was born at sea and named Oceanus.
Land was sighted on November 9, 1620. The passengers had endured miserable conditions for about 65 days, and they were led by William Brewster in Psalm 100 as a prayer of thanksgiving. It was confirmed that the area was Cape Cod within the New England territory recommended by Weston. An attempt was made to sail the ship around the cape towards the Hudson River, also within the New England grant area, but they encountered shoals and difficult currents around Cape Malabar (the old French name for Monomoy Island). They decided to turn around, and the ship was anchored in Provincetown Harbor by November 11 / 12.
The charter was incomplete for the Plymouth Council for New England when the colonists departed England (it was granted while they were in transit on November 3 / 13). They arrived without a patent; the older Wincob patent was from their abandoned dealings with the London Company. Some of the passengers, aware of the situation, suggested that they were free to do as they chose upon landing, without a patent in place, and to ignore the contract with the investors.
A brief contract was drafted to address this issue, later known as the Mayflower Compact, promising cooperation among the settlers "for the general good of the Colony unto which we promise all due submission and obedience. '' It organized them into what was called a "civill body politick, '' in which issues would be decided by voting, the key ingredient of democracy. It was ratified by majority rule, with 41 adult male Pilgrims signing for the 102 passengers (73 males and 29 females). Included in the company were 19 male servants and three female servants, along with some sailors and craftsmen hired for short - term service to the colony. At this time, John Carver was chosen as the colony 's first governor. It was Carver who had chartered the Mayflower and his is the first signature on the Mayflower Compact, being the most respected and affluent member of the group. The Mayflower Compact was the seed of American democracy and has been called the world 's first written constitution.
Thorough exploration of the area was delayed for more than two weeks because the shallop or pinnace (a smaller sailing vessel) which they brought had been partially dismantled to fit aboard the Mayflower and was further damaged in transit. Small parties, however, waded to the beach to fetch firewood and attend to long - deferred personal hygiene.
Exploratory parties were undertaken while awaiting the shallop, led by Myles Standish (an English soldier whom the colonists had met while in Leiden) and Christopher Jones. They encountered an old European - built house and iron kettle, left behind by some ship 's crew, and a few recently cultivated fields, showing corn stubble.
They came upon an artificial mound near the dunes which they partially uncovered and found to be an Indian grave. Farther along, a similar mound was found, more recently made, and they discovered that some of the burial mounds also contained corn. The colonists took some of the corn, intending to use it as seed for planting, while they reburied the rest. William Bradford later recorded in his book Of Plymouth Plantation that, after the shallop had been repaired,
They also found two of the Indian 's houses covered with mats, and some of their implements in them; but the people had run away and could not be seen. Without permission they took more corn, and beans of various colours. These they brought away, intending to give them full satisfaction (payment) when they should meet with any of them, -- as about six months afterwards they did.
And it is to be noted as a special providence of God, and a great mercy to this poor people, that they thus got seed to plant corn the next year, or they might have starved; for they had none, nor any likelihood of getting any, till too late for the planting season.
By December, most of the passengers and crew had become ill, coughing violently. Many were also suffering from the effects of scurvy. There had already been ice and snowfall, hampering exploration efforts; half of them died during the first winter.
Explorations resumed on December 6 / 16. The shallop party headed south along the cape, consisting of seven colonists from Leiden, three from London, and seven crew; they chose to land at the area inhabited by the Nauset people (the area around Brewster, Chatham, Eastham, Harwich, and Orleans) where they saw some people on the shore who fled when they approached. Inland they found more mounds, one containing acorns, which they exhumed and left, and more graves, which they decided not to dig.
They remained ashore overnight and heard cries near the encampment. The following morning, they were attacked by Indians who shot at them with arrows. The colonists retrieved their firearms and shot back, then chased them into the woods but did not find them. There was no more contact with Indians for several months.
The local Indians were already familiar with the English, who had intermittently visited the area for fishing and trade before Mayflower arrived. In the Cape Cod area, relations were poor following a visit several years earlier by Thomas Hunt. Hunt kidnapped 20 people from Patuxet (the site of Plymouth Colony) and another seven from Nausett, and he attempted to sell them as slaves in Europe. One of the Patuxet abductees was Squanto, who became an ally of the Plymouth Colony.
The Pokanokets also lived nearby and had developed a particular dislike for the English after one group came in, captured numerous people, and shot them aboard their ship. By this time, there had already been reciprocal killings at Martha 's Vineyard and Cape Cod. But during one of the captures by the English, Squanto escaped to England and there became a Christian. When he came back, he found that most of his tribe had died from plague.
Continuing westward, the shallop 's mast and rudder were broken by storms and the sail was lost. They rowed for safety, encountering the harbor formed by Duxbury and Plymouth barrier beaches and stumbling on land in the darkness. They remained at this spot for two days to recuperate and repair equipment. They named it Clark 's Island for a Mayflower mate who first set foot on it.
They resumed exploration on Monday, December 11 / 21 when the party crossed over to the mainland and surveyed the area that ultimately became the settlement. The anniversary of this survey is observed in Massachusetts as Forefathers ' Day and is traditionally associated with the Plymouth Rock landing tradition. This land was especially suited to winter building because it had already been cleared, and the tall hills provided a good defensive position.
The cleared village was known as Patuxet to the Wampanoag people and was abandoned about three years earlier following a plague that killed all of its residents. The "Indian fever '' involved hemorrhaging and is assumed to have been fulminating smallpox. The outbreak had been severe enough that the colonists discovered unburied skeletons in the dwellings.
The exploratory party returned to the Mayflower, anchored twenty - five miles (40 km) away, having been brought to the harbor on December 16 / 26. Only nearby sites were evaluated, with a hill in Plymouth (so named on earlier charts) chosen on December 19 / 29.
Construction commenced immediately, with the first common house nearly completed by January 9 / 19, 20 feet square and built for general use. At this point, each single man was ordered to join himself to one of the 19 families in order to eliminate the need to build any more houses than absolutely necessary. Each extended family was assigned a plot one - half rod wide and three rods long for each household member, then each family built its own dwelling. Supplies were brought ashore, and the settlement was mostly complete by early February.
When the first house was finished, it immediately became a hospital for the ill Pilgrims. Thirty - one of the company were dead by the end of February, with deaths still rising. Coles Hill became the first cemetery, on a prominence above the beach, and the graves were allowed to overgrow with grass for fear that the Indians would discover how weakened the settlement had actually become.
Between the landing and March, only 47 colonists had survived the diseases that they contracted on the ship. During the worst of the sickness, only six or seven of the group were able to feed and care for the rest. In this time, half the Mayflower crew also died.
William Bradford became governor in 1621 upon the death of John Carver. On March 22, 1621, the Pilgrims of Plymouth Colony signed a peace treaty with Massasoit of the Wampanoags. The patent of Plymouth Colony was surrendered by Bradford to the freemen in 1640, minus a small reserve of three tracts of land. Bradford served for 11 consecutive years, and was elected to various other terms until his death in 1657.
The colony contained Bristol County, Plymouth County, and Barnstable County, Massachusetts. The Massachusetts Bay Colony was reorganized and issued a new charter as the Province of Massachusetts Bay in 1691, and Plymouth ended its history as a separate colony.
The first use of the word "pilgrims '' for the Mayflower passengers appeared in William Bradford 's Of Plymouth Plantation. As he finished recounting his group 's July 1620 departure from Leiden, he used the imagery of Hebrews 11: 13 -- 16 about Old Testament "strangers and pilgrims '' who had the opportunity to return to their old country but instead longed for a better, heavenly country.
So they lefte (that) goodly & pleasante citie, which had been ther resting place, nere 12 years; but they knew they were pilgrimes, & looked not much on these things; but lift up their eyes to y heavens, their dearest cuntrie, and quieted their spirits.
There is no record of the term Pilgrims being used to describe Plymouth 's founders for 150 years after Bradford wrote this passage, except when quoting him. The Mayflower 's story was retold by historians Nathaniel Morton (in 1669) and Cotton Mather (in 1702), and both paraphrased Bradford 's passage and used his word pilgrims. At Plymouth 's Forefathers ' Day observance in 1793, Rev. Chandler Robbins recited this passage.
The name Pilgrims was probably not in popular use before about 1798, even though Plymouth celebrated Forefathers ' Day several times between 1769 and 1798 and used a variety of terms to honor Plymouth 's founders. Pilgrims was not mentioned, other than in Robbins ' 1793 recitation. The first documented use that was not simply quoting Bradford was at a December 22, 1798 celebration of Forefathers ' Day in Boston. A song composed for the occasion used the word Pilgrims, and the participants drank a toast to "The Pilgrims of Leyden ''. The term was used prominently during Plymouth 's next Forefathers ' Day celebration in 1800, and was used in Forefathers ' Day observances thereafter.
By the 1820s, the term Pilgrims was becoming more common. Daniel Webster repeatedly referred to "the Pilgrims '' in his December 22, 1820 address for Plymouth 's bicentennial, which was widely read. Harriet Vaughan Cheney used it in her 1824 novel A Peep at the Pilgrims in Sixteen Thirty - Six, and the term also gained popularity with the 1825 publication of Felicia Hemans 's classic poem "The Landing of the Pilgrim Fathers ''.
|
when was the 18th amendment passed and ratified | Eighteenth amendment to the United States Constitution - wikipedia
The Eighteenth Amendment (Amendment XVIII) of the United States Constitution effectively established the prohibition of alcoholic beverages in the United States by declaring the production, transport, and sale of alcohol (though not the consumption or private possession) illegal. The separate Volstead Act set down methods for enforcing the Eighteenth Amendment, and defined which "intoxicating liquors '' were prohibited, and which were excluded from prohibition (e.g., for medical and religious purposes). The Amendment was the first to set a time delay before it would take effect following ratification, and the first to set a time limit for its ratification by the states. President Woodrow Wilson vetoed the bill, but the House of Representatives overrode the veto, and the Senate did so as well the next day. The Volstead Act set the starting date for nationwide prohibition for January 17, 1920, which was the earliest day allowed by the Eighteenth Amendment.
The Amendment was in effect for the following 13 years. It was repealed in 1933 by ratification of the Twenty - First Amendment. The Twenty - first Amendment was ratified on December 5, 1933. It is unique among the 27 amendments of the U.S. Constitution for being the only one to repeal a prior amendment and to have been ratified by state ratifying conventions.
Section 1. After one year from the ratification of this article the manufacture, sale, or transportation of intoxicating liquors within, the importation thereof into, or the exportation thereof from the United States and all the territory subject to the jurisdiction thereof for beverage purposes is hereby prohibited.
Section 2. The Congress and the several States shall have concurrent power to enforce this article by appropriate legislation.
Section 3. This article shall be inoperative unless it shall have been ratified as an amendment to the Constitution by the legislatures of the several States, as provided in the Constitution, within seven years from the date of the submission hereof to the States by the Congress.
The Eighteenth Amendment was the result of decades of effort by the temperance movement in the United States and at the time was generally considered a progressive amendment. Starting in 1906, the Anti-Saloon League (ASL) began leading a campaign to ban the sale of alcohol on a state level. They led speeches, advertisements, and public demonstrations, claiming that banning the sale of alcohol would get rid of poverty and social issues such as immoral behavior and violence. They believed it would also inspire new forms of sociability between men and women, that families would be happier, that fewer industrial mistakes would be made and that, overall, the world would be a better place. Other groups, such as the Woman 's Christian Temperance Union, also tried to ban the sale, manufacture, and distribution of alcoholic beverages. A well - known reformer during this time period was Carrie Amelia Moore Nation, whose violent actions (such as vandalizing saloon property) made her a household name across America. Many state legislatures had already enacted statewide prohibition prior to the ratification of the Eighteenth Amendment, though most did not ban the consumption of alcohol in households. It took some states longer than others to ratify the amendment, especially northern states such as New York, New Jersey, and Massachusetts, which violated the law by still allowing some wines and beers to be sold. By 1916, 23 of the 48 states had already passed laws against saloons, some even banning the manufacture of alcohol in the first place.
The Temperance Movement was dedicated to the complete abstinence of alcohol from public life. The movement began in the early 1800s within the church, and was very religiously motivated. The central areas the group was founded out of were in the Saratoga area of New York, as well as in Massachusetts. Churches were also highly influential in gaining new members and support, garnering 6,000 local societies in several different states.
A group that was inspired by the movement was the Anti-Saloon League, which at the turn of the 20th century began heavily lobbying for prohibition in the United States. The group was founded in 1893 in the state of Ohio, gained massive support from Evangelical Protestants, and became a national organization in 1895. The group was successful in helping implement prohibition through heavy lobbying and its vast influence. Following the repeal of prohibition, the group fell out of power, and in 1950 it merged with other groups to form the National Temperance League.
On August 1, 1917, the Senate passed a resolution containing the language of the amendment to be presented to the states for ratification. The vote was 65 to 20, with the Democrats voting 36 in favor and 12 in opposition; and the Republicans voting 29 in favor and 8 in opposition. The House of Representatives passed a revised resolution on December 17, 1917. This was the first proposed amendment to impose a deadline by which it had to be ratified; if the deadline was not met, the amendment would be discarded.
In the House, the vote was 282 to 128, with the Democrats voting 141 in favor and 64 in opposition; and the Republicans voting 137 in favor and 62 in opposition. Four Independents in the House voted in favor and two Independents cast votes against the amendment. It was officially proposed by the Congress to the states when the Senate passed the resolution, by a vote of 47 to 8, the next day, December 18.
The amendment and its enabling legislation did not ban the consumption of alcohol, but made it difficult to obtain alcoholic beverages legally, as it prohibited their sale, manufacture and distribution in U.S. territory. Anyone caught selling, manufacturing or distributing alcoholic beverages would be arrested. Because many states had already implemented prohibition, the amendment was quickly ratified. The ratification of the Amendment was completed on January 16, 1919, when Nebraska became the 36th of the 48 states then in the Union to ratify it. On January 29, acting Secretary of State Frank L. Polk certified the ratification.
To define the language used in the Amendment, Congress enacted enabling legislation called the National Prohibition Act, better known as the Volstead Act, on October 28, 1919. President Woodrow Wilson vetoed that bill, but the House of Representatives immediately voted to override the veto and the Senate voted similarly the next day. The Volstead Act set the starting date for nationwide prohibition for January 17, 1920, which was the earliest date allowed by the 18th amendment.
The Volstead Act was passed by Congress, over the veto of President Woodrow Wilson, as the means of enforcing the 18th amendment. The act was conceived and introduced by Wayne Wheeler, a leader of the Anti-Saloon League, a group which held alcohol responsible for almost all of society 's problems and which ran many campaigns against the sale of alcohol. The law was also heavily supported by the House Judiciary Committee chairman at the time, Andrew Volstead of Minnesota, and was named in his honor. The act in its written form laid the groundwork of prohibition, defining the procedures for banning alcoholic beverages, including their production and distribution.
Volstead had once before introduced an early version of the law to Congress. It was first brought to the floor on May 27, 1919, meeting heavy resistance from Democratic senators, who introduced instead what was called the "wet law '', an attempt to end the wartime prohibition laws put into effect much earlier. The debate over prohibition would continue to be fueled in Congress even longer; for that entire session the House was divided between what became known as the "bone - drys '' and the "wets ''.
With Republicans in the majority of the House of Representatives, the act was passed on July 22, 1919, with 287 in favor and 100 opposed. The act was in large part a failure, being unable to prevent mass distribution of alcoholic beverages while also inadvertently giving way to a massive increase in organized crime. The act would go on to be the standard for enforcing prohibition until the passing of the 21st amendment in 1933 effectively repealed it.
The proposed amendment was the first to contain a provision setting a deadline for its ratification. That clause of the amendment was challenged, with the case reaching the US Supreme Court. It upheld the constitutionality of such a deadline in Dillon v. Gloss (1921). The Supreme Court also upheld the ratification by the Ohio legislature in Hawke v. Smith (1920), despite a petition requiring that the matter go to ballot.
This was not the only controversy around the amendment. The phrase "intoxicating liquor '' would not logically have included beer and wine, and their inclusion in the prohibition came as a surprise to the general public, as well as to wine and beer makers. This controversy caused many Northern states to not abide by the amendment, which caused some problems. The brewers were probably not the only Americans to be surprised at the severity of the regime thus created. Voters who considered their own drinking habits blameless, but who supported prohibition to discipline others, also received a rude shock. That shock came with the realization that federal prohibition went much farther in the direction of banning personal consumption than all local prohibition ordinances and many state prohibition statutes. National Prohibition turned out to be quite a different beast than its local and state cousins.
Under Prohibition, the illegal manufacture and sale of liquor -- known as "bootlegging '' -- occurred on a large scale across the United States. In urban areas, where the majority of the population opposed Prohibition, enforcement was generally much weaker than in rural areas and smaller towns. Perhaps the most dramatic consequence of Prohibition was the effect it had on organized crime in the United States: as the production and sale of alcohol went further underground, it began to be controlled by the Mafia and other gangs, who transformed themselves into sophisticated criminal enterprises that reaped huge profits from the illicit liquor trade.
When it came to its booming bootleg business, the Mafia became skilled at bribing police and politicians to look the other way. Chicago 's Al Capone emerged as the most notorious example of this phenomenon, earning an estimated $60 million annually from the bootlegging and speakeasy operations he controlled. In addition to bootlegging, gambling and prostitution reached new heights during the 1920s as well. A growing number of Americans came to blame Prohibition for this widespread moral decay and disorder -- despite the fact that the legislation had intended to do the opposite -- and to condemn it as a dangerous infringement on the freedom of the individual.
In his important study both of the Eighteenth Amendment and its repeal, Daniel Okrent identifies the powerful political coalition that worked successfully in the two decades leading to the ratification of the Eighteenth Amendment:
Five distinct, if occasionally overlapping, components made up this unspoken coalition: racists, progressives, suffragists, populists (whose ranks included a small socialist auxiliary), and nativists. Adherents of each group may have been opposed to alcohol for its own sake, but used the Prohibition impulse to advance ideologies and causes that had little to do with it.
If public sentiment had turned against Prohibition by the late 1920s, the advent of the Great Depression only hastened its demise, as some argued that the ban on alcohol denied jobs to the unemployed and much - needed revenue to the government. The efforts of the nonpartisan Association Against the Prohibition Amendment (AAPA) added to public disillusionment. In 1932, the platform of Democratic presidential candidate Franklin D. Roosevelt included a plank for repealing the 18th Amendment, and his victory that November marked a certain end to Prohibition.
In February 1933, Congress adopted a resolution proposing the 21st Amendment to the Constitution, which repealed both the 18th Amendment and the Volstead Act. The resolution required state conventions, rather than the state legislatures, to approve the amendment, effectively reducing the process to a one - state, one - vote referendum rather than a popular vote contest. That December, Utah became the 36th state to ratify the amendment, achieving the necessary majority for repeal. A few states continued statewide prohibition after 1933, but by 1966 all of them had abandoned it. Since then, liquor control in the United States has largely been determined at the local level.
Just after the Eighteenth Amendment 's adoption, there was a significant reduction in alcohol consumption among the general public and particularly among low - income groups. There were fewer hospitalizations for alcoholism and likewise fewer liver - related medical problems. However, consumption soon climbed as underworld entrepreneurs began producing "rotgut '' alcohol, which was often full of dangerous contaminants. With the rise of home - distilled alcohol, many cases of careless distilling led to the deaths of many citizens. During the ban, upwards of 10,000 deaths can be attributed to wood alcohol (methanol) poisoning. Ultimately, during prohibition, use and abuse of alcohol ended up higher than before it started. The greatest unintended consequence of Prohibition, however, was the plainest to see. For over a decade, the law that was meant to foster temperance instead fostered intemperance and excess. The solution the United States had devised to address the problem of alcohol abuse had instead made the problem even worse. The statistics of the period are notoriously unreliable, but it is very clear that in many parts of the United States more people were drinking, and people were drinking more.
Though there were significant increases in crimes involved in the production and distribution of illegal alcohol, there was an initial reduction in overall crime, mainly in types of crimes associated with the effects of alcohol consumption, such as public drunkenness. Those who continued to use alcohol tended to turn to organized criminal syndicates. Law enforcement was not strong enough to stop all liquor traffic; however, it used "sting '' operations -- Prohibition agent Eliot Ness famously used wiretapping to discern secret locations of breweries. The prisons became crowded, which led to fewer arrests for the distribution of alcohol, as well as those arrested being charged with small fines rather than prison time. The murder rate fell for two years, but then rose to record highs because this market became extremely attractive to criminal organizations, a trend that reversed the very year prohibition ended. Overall, crime rose 24 %, including increases in assault and battery, theft, and burglary.
Anti-prohibition groups arose and worked to have the amendment repealed once it became apparent that Prohibition was an unprecedented catastrophe. The Eighteenth Amendment failed because of its sudden, strict enforcement. It did not allow the people to have a say or let them gradually ease into the complete ban of alcoholic beverages. Instead, the people rebelled, and the introduction of speakeasies and "flappers '' came about.
The Twenty - first Amendment repealed the Eighteenth Amendment on December 5, 1933.
Following ratification in 1919, the effects of the amendment were long - lasting, leading to increases in crime in many large cities in the United States, such as Chicago, New York, and Los Angeles. Along with this came many separate forms of illegal alcohol distribution, including speakeasies and bootlegging, as well as illegal distilling operations.
Bootlegging got its start in towns bordering Mexico and Canada, as well as in areas with several ports and harbors, a favorite distribution area for bootleggers being Atlantic City, New Jersey. The alcohol was often supplied from various foreign distributors, like Cuba and the Bahamas, or even Newfoundland and islands under rule by the French.
The government in response employed the Coast Guard to search and detain any ships transporting alcohol into the ports, but with this came several complications, such as disputes over where jurisdiction lay on the water. This was what made Atlantic City such a hot spot for smuggling operations: a shipping point nearly three miles off shore that U.S. officials could not investigate, further complicating enforcement of the amendment. What made matters even worse for the Coast Guard was that it was not well enough equipped to chase down bootlegging vessels. The Coast Guard, however, was able to respond to these issues; it began searching vessels out at sea instead of when they made port, and upgraded its own vessels, allowing for more efficient and consistent arrests.
But even with the advancements in enforcing the amendment, there were still complications that plagued the government 's efforts. One issue came in the form of forged prescriptions for alcoholic beverages. Many forms of alcohol were being sold over the counter at the time, under the guise of being for medical purposes. But in truth, the claims that these beverages were medically fit to be sold to consumers had been falsified.
Bootlegging itself was the leading factor in the development of organized crime rings in big cities, given that controlling and distributing liquor was a very difficult task. From this arose many profitable gangs that controlled every aspect of the distribution process, whether concealed brewing and storage, operating a speakeasy, or selling in restaurants and nightclubs run by a specific syndicate. With organized crime becoming a rising problem in the United States, control of specific territories was a key objective among gangs, leading to many violent confrontations; murder rates and burglaries increased heavily between 1920 and 1933. Bootlegging was also found to be a gateway crime for many gangs, who would then expand operations into crimes such as prostitution, gambling rackets, narcotics, loan - sharking, extortion and labor rackets, causing problems to persist long after the amendment was repealed.
|
whats the difference between on road and off road diesel | Diesel fuel - Wikipedia
Diesel fuel / ˈdiːzəl / in general is any liquid fuel used in diesel engines, in which fuel ignition takes place, without any spark, as a result of compression of the inlet air and subsequent injection of fuel. (Glow plugs, grid heaters and heater blocks help achieve high temperatures for combustion during engine startup in cold weather.) Diesel engines have found broad use as a result of higher thermodynamic efficiency and thus fuel efficiency. This is particularly noted where diesel engines are run at part load; because their air supply is not throttled as in a petrol engine, their efficiency remains very high.
The most common type of diesel fuel is a specific fractional distillate of petroleum fuel oil, but alternatives that are not derived from petroleum, such as biodiesel, biomass to liquid (BTL) or gas to liquid (GTL) diesel, are increasingly being developed and adopted. To distinguish these types, petroleum - derived diesel is increasingly called petrodiesel. Ultra-low - sulfur diesel (ULSD) is a standard for defining diesel fuel with substantially lowered sulfur contents. As of 2016, almost all of the petroleum - based diesel fuel available in UK, Europe and North America is of a ULSD type. In the UK, diesel fuel for on - road use is commonly abbreviated DERV, standing for diesel - engined road vehicle, which carries a tax premium over equivalent fuel for non-road use (see § Taxation). In Australia diesel fuel is also known as distillate, and in Indonesia, it is known as Solar, a trademarked name by the local oil company Pertamina.
Diesel fuel originated from experiments conducted by German scientist and inventor Rudolf Diesel for his compression - ignition engine he invented in 1892. Diesel originally designed his engine to use coal dust as fuel, and experimented with other fuels including vegetable oils, such as peanut oil, which was used to power the engines which he exhibited at the 1900 Paris Exposition and the 1911 World 's Fair in Paris.
Diesel fuel is produced from various sources, the most common being petroleum. Other sources include biomass, animal fat, biogas, natural gas, and coal liquefaction.
Petroleum diesel, also called petrodiesel, or fossil diesel is the most common type of diesel fuel. It is produced from the fractional distillation of crude oil between 200 ° C (392 ° F) and 350 ° C (662 ° F) at atmospheric pressure, resulting in a mixture of carbon chains that typically contain between 8 and 21 carbon atoms per molecule.
Synthetic diesel can be produced from any carbonaceous material, including biomass, biogas, natural gas, coal and many others. The raw material is gasified into synthesis gas, which after purification is converted by the Fischer -- Tropsch process to a synthetic diesel.
The process is typically referred to as biomass - to - liquid (BTL), gas - to - liquid (GTL) or coal - to - liquid (CTL), depending on the raw material used.
Paraffinic synthetic diesel generally has a near - zero content of sulfur and very low aromatics content, reducing unregulated emissions of toxic hydrocarbons, nitrous oxides and particulate matter (PM).
Fatty - acid methyl ester (FAME), more widely known as biodiesel, is obtained from vegetable oil or animal fats (bio lipids) which have been transesterified with methanol. It can be produced from many types of oils, the most common being rapeseed oil (rapeseed methyl ester, RME) in Europe and soybean oil (soy methyl ester, SME) in the US. Methanol can also be replaced with ethanol for the transesterification process, which results in the production of ethyl esters. The transesterification processes use catalysts, such as sodium or potassium hydroxide, to convert vegetable oil and methanol into FAME and the undesirable byproducts glycerine and water, which will need to be removed from the fuel along with methanol traces. FAME can be used pure (B100) in engines where the manufacturer approves such use, but it is more often used as a mix with diesel, BXX where XX is the biodiesel content in percent.
FAME as a fuel is specified in DIN EN 14214 and ASTM D6751.
Fuel injection equipment (FIE) manufacturers have raised several concerns regarding FAME fuels, identifying FAME as the cause of the following problems: corrosion of fuel injection components, low - pressure fuel system blockage, increased dilution and polymerization of engine sump oil, pump seizures due to high fuel viscosity at low temperature, increased injection pressure, elastomeric seal failures and fuel injector spray blockage. Pure biodiesel has an energy content about 5 -- 10 % lower than petroleum diesel; the loss in power when using pure biodiesel is 5 -- 7 %.
Unsaturated fatty acids are the source for the lower oxidation stability; they react with oxygen and form peroxides and result in degradation byproducts, which can cause sludge and lacquer in the fuel system.
As FAME contains low levels of sulfur, the emissions of sulfur oxides and sulfates, major components of acid rain, are low. Use of biodiesel also results in reductions of unburned hydrocarbons, carbon monoxide (CO), and particulate matter. CO emissions using biodiesel are substantially reduced, on the order of 50 % compared to most petrodiesel fuels. The exhaust emissions of particulate matter from biodiesel have been found to be 30 % lower than overall particulate matter emissions from petrodiesel. The exhaust emissions of total hydrocarbons (a contributing factor in the localized formation of smog and ozone) are up to 93 % lower for biodiesel than diesel fuel.
Biodiesel also may reduce health risks associated with petroleum diesel. Biodiesel emissions showed decreased levels of polycyclic aromatic hydrocarbon (PAH) and nitrited PAH compounds, which have been identified as potential cancer - causing compounds. In recent testing, PAH compounds were reduced by 75 -- 85 %, except for benz (a) anthracene, which was reduced by roughly 50 %. Targeted nPAH compounds were also reduced dramatically with biodiesel fuel, with 2 - nitrofluorene and 1 - nitropyrene reduced by 90 %, and the rest of the nPAH compounds reduced to only trace levels.
This category of diesel fuels involves converting the triglycerides in vegetable oil and animal fats into alkanes by refining and hydrogenation, such as H - Bio. The produced fuel has many properties that are similar to synthetic diesel, and are free from the many disadvantages of FAME.
Dimethyl ether, DME, is a synthetic, gaseous diesel fuel that results in clean combustion with very little soot and reduced NOx emissions.
In the US, diesel is recommended to be stored in a yellow container to differentiate it from kerosene and gasoline, which are typically kept in blue and red containers, respectively. In the UK, diesel is normally stored in a black container, to differentiate it from unleaded petrol (which is commonly stored in a green container) and leaded petrol (which is stored in a red container).
The principal measure of diesel fuel quality is its cetane number. A cetane number is a measure of the delay of ignition of a diesel fuel. A higher cetane number indicates that the fuel ignites more readily when sprayed into hot compressed air. European (EN 590 standard) road diesel has a minimum cetane number of 51. Fuels with higher cetane numbers, normally "premium '' diesel fuels with additional cleaning agents and some synthetic content, are available in some markets.
As of 2010, the density of petroleum diesel is about 0.832 kg / L (6.943 lb / US gal), about 11.6 % more than ethanol - free petrol (gasoline), which has a density of about 0.745 kg / L (6.217 lb / US gal). About 86.1 % of the fuel mass is carbon, and when burned, it offers a net heating value of 43.1 MJ / kg as opposed to 43.2 MJ / kg for gasoline. However, due to the higher density, diesel offers a higher volumetric energy density at 35.86 MJ / L (128,700 BTU / US gal) vs. 32.18 MJ / L (115,500 BTU / US gal) for gasoline, some 11 % higher, which should be considered when comparing fuel efficiency by volume. The CO2 emissions from diesel are 73.25 g / MJ, just slightly lower than for gasoline at 73.38 g / MJ. Diesel is generally simpler to refine from petroleum than gasoline, and contains hydrocarbons having a boiling point in the range of 180 -- 360 ° C (360 -- 680 ° F). The price of diesel traditionally rises during colder months as demand for heating oil, which is refined in much the same way, also rises. Because of recent changes in fuel quality regulations, additional refining is required to remove sulfur, which contributes to a sometimes higher cost. In many parts of the United States and throughout the United Kingdom and Australia, diesel may be priced higher than petrol. Reasons for higher - priced diesel include the shutdown of some refineries in the Gulf of Mexico, diversion of mass refining capacity to gasoline production, and a recent transfer to ultra-low - sulfur diesel (ULSD), which causes infrastructural complications. In Sweden, a diesel fuel designated as MK - 1 (class 1 environmental diesel) is also sold; this is a ULSD that also has a lower aromatics content, with a limit of 5 %. This fuel is slightly more expensive to produce than regular ULSD.
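The volumetric figures above follow directly from the quoted densities and heating values; a short Python check, using only the numbers given in this section, makes the arithmetic explicit:

```python
# Verify the volumetric energy density figures quoted above.
diesel_density = 0.832   # kg/L
petrol_density = 0.745   # kg/L
diesel_lhv = 43.1        # MJ/kg, net heating value of diesel
petrol_lhv = 43.2        # MJ/kg, net heating value of petrol

diesel_vol = diesel_density * diesel_lhv   # MJ/L
petrol_vol = petrol_density * petrol_lhv   # MJ/L
advantage = (diesel_vol / petrol_vol - 1) * 100

print(f"Diesel: {diesel_vol:.2f} MJ/L")        # 35.86
print(f"Petrol: {petrol_vol:.2f} MJ/L")        # 32.18
print(f"Diesel advantage: {advantage:.1f} %")  # 11.4
```

This roughly 11 % volumetric advantage is why comparing fuel economy by volume alone flatters diesel; per unit mass the two fuels carry nearly identical energy.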
Diesel fuel is very similar to heating oil, which is used in central heating. In Europe, the United States, and Canada, taxes on diesel fuel are higher than on heating oil due to the fuel tax, and in those areas, heating oil is marked with fuel dyes and trace chemicals to prevent and detect tax fraud. "Untaxed '' diesel (sometimes called "off - road diesel '' or "red diesel '' due to its red dye) is available in some countries for use primarily in agricultural applications, such as fuel for tractors, recreational and utility vehicles or other noncommercial vehicles that do not use public roads. This fuel may have sulfur levels that exceed the limits for road use in some countries (e.g. US).
This untaxed diesel is dyed red for identification, and if it is used for a typically taxed purpose (such as on - road driving), the user can be fined (e.g. US $10,000 in the US). In the United Kingdom, Belgium and the Netherlands, it is known as red diesel (or gas oil), and is also used in agricultural vehicles, home heating tanks, refrigeration units on vans / trucks which contain perishable items such as food and medicine, and for marine craft. Diesel fuel, or marked gas oil, is dyed green in the Republic of Ireland and Norway. The term "diesel - engined road vehicle '' (DERV) is used in the UK as a synonym for unmarked road diesel fuel. In India, taxes on diesel fuel are lower than on petrol, as the majority of the transportation of grain and other essential commodities across the country runs on diesel.
Taxes on biodiesel in the US vary between states; some states (Texas, for example) have no tax on biodiesel and a reduced tax on biodiesel blends equivalent to the amount of biodiesel in the blend, so that B20 fuel is taxed 20 % less than pure petrodiesel. Other states, such as North Carolina, tax biodiesel (in any blended configuration) the same as petrodiesel, although they have introduced new incentives to producers and users of all biofuels.
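The Texas - style blend rule described above, where tax is reduced in proportion to biodiesel content, amounts to a one - line calculation; the base tax rate below is a hypothetical figure for illustration, not an actual state rate:

```python
def blended_fuel_tax(base_rate, biodiesel_fraction):
    """Tax per unit volume for a biodiesel blend under a rule that
    taxes only the petrodiesel portion of the blend."""
    return base_rate * (1 - biodiesel_fraction)

rate = 0.20  # hypothetical tax per gallon on pure petrodiesel
# B20 (20% biodiesel) is taxed 20% less than pure petrodiesel:
print(f"{blended_fuel_tax(rate, 0.20):.2f}")  # 0.16
```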
Unlike gasoline and liquefied petroleum gas engines, diesel engines do not use high - voltage spark ignition (spark plugs). An engine running on diesel compresses the air inside the cylinder to high pressures and temperatures (compression ratios from 14: 1 to 18: 1 are common in current diesel engines); the engine generally injects the diesel fuel directly into the cylinder, starting a few degrees before top dead center (TDC) and continuing during the combustion event. The high temperatures inside the cylinder cause the diesel fuel to react with the oxygen in the mix (burn or oxidize), heating and expanding the burning mixture to convert the thermal / pressure difference into mechanical work, i.e., to move the piston. Engines have glow plugs and grid heaters to help start the engine by preheating the cylinders to a minimum operating temperature. Diesel engines are lean burn engines, burning the fuel in more air than is needed for the chemical reaction. They thus use less fuel than rich burn spark ignition engines which use a stoichiometric air - fuel ratio (just enough air to react with the fuel). As Professor Harvey of the University of Toronto notes, "due to the absence of throttling (constant amount of air admitted, per unit fuel, with no user - determined variation), and the high compression ratio and lean fuel mixture, diesel engines are substantially more efficient than spark - ignited engines '', generally; Harvey cites the side - by - side comparisons of Schipper et al. and the estimates of > 20 % lower fuel use and (given differences in energy content between fuel types) > 15 % lower energy use. Gas turbines, some other types of internal combustion engines, and external combustion engines can also be designed to run on diesel fuel.
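The compression - ignition principle described above can be illustrated with the ideal - gas adiabatic relation T2 = T1 · r^(γ−1); the intake temperature here is an assumed value, and real engines deviate from this idealized estimate:

```python
# Ideal-gas adiabatic compression estimate of in-cylinder air
# temperature, illustrating why compression alone can ignite the
# injected fuel. Intake temperature is an illustrative assumption.
gamma = 1.4          # heat capacity ratio of air
T_intake = 300.0     # K, assumed intake air temperature

for ratio in (14, 18):  # compression ratios quoted above
    T_compressed = T_intake * ratio ** (gamma - 1)
    print(f"r = {ratio}: ~{T_compressed:.0f} K "
          f"({T_compressed - 273.15:.0f} deg C)")
# r = 14: ~862 K (589 deg C); r = 18: ~953 K (680 deg C)
```

Both estimates are well above diesel's autoignition temperature, which is why fuel injected near top dead center ignites without a spark.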
The viscosity requirement of diesel fuel is usually specified at 40 ° C. A disadvantage of diesel as a vehicle fuel in cold climates is that its viscosity increases as the temperature decreases, changing it into a gel (see Compression Ignition -- Gelling) that cannot flow in fuel systems. Special low - temperature diesel contains additives to keep it liquid at lower temperatures, but starting a diesel engine in very cold weather may still pose considerable difficulties. Another disadvantage of diesel engines compared to petrol / gasoline engines is the possibility of diesel engine runaway failure. Since diesel engines do not need spark ignition, they can run as long as diesel fuel is supplied. Fuel is typically supplied via a fuel pump. If the pump breaks down in an "open '' position, the supply of fuel will be unrestricted, and the engine will run away and risk terminal failure.
With turbocharged engines, the oil seals on the turbocharger may fail, allowing lubricating oil into the combustion chamber, where it is burned like regular diesel fuel. In vehicles or installations that use diesel engines and also bottled gas, a gas leak into the engine room could also provide fuel for a runaway, via the engine air intake.
The crank case ventilation of modern road - use diesel engines is diverted into the intake manifold, because ventilating the crank case to the outside air is inadvisable due to the lubricant mist it contains. If the engine 's piston rings malfunction, excessive pressure in the crank case will force a mist of engine lubricant into the intake manifold. Since most engines use oil which can be burnt in the same fashion as diesel, this will result in diesel engine runaway. To prevent this, more premium crank case ventilation solutions are fitted with a filter to catch the lubricant mist.
Most modern road - use diesel engines are provided with an FRP valve in the intake manifold (often mistaken for a petrol engine throttle body). In most basic applications this valve closes the flow of air to the engine when the vehicle is switched off, preventing diesel engine runaway by starving the engine of oxygen; this also makes standard shutdown much smoother by eliminating compression and decompression rattle, as the pistons effectively work in a vacuum. In more advanced control systems this FRP valve can be shut by an electronic control unit when it senses a runaway scenario.
Diesel fuel is widely used in most types of transportation. Trucks and buses, which were often gasoline - powered in the 1920s through 1950s, are now almost exclusively diesel - powered. The gasoline - powered passenger automobile is the major exception; diesel cars are less numerous worldwide.
Diesel displaced coal and fuel oil for steam - powered vehicles in the latter half of the 20th century, and is now used almost exclusively for the combustion engines of self - powered rail vehicles (locomotives and railcars).
The first diesel - powered flight of a fixed - wing aircraft took place on the evening of 18 September 1928, at the Packard Proving Grounds near Utica, Michigan. With Captain Lionel M. Woolson and Walter Lees at the controls the first "official '' test flight was taken the next morning, flying a Stinson SM1B (X7654), powered by a Packard DR - 980 9 - cylinder diesel radial engine, designed by Woolson. Charles Lindbergh flew the same aircraft and in 1929, it was flown 621 miles (999 km) nonstop from Detroit to Langley Field, near Norfolk, Virginia. In 1931, Walter Lees and Fredrick Brossy set the nonstop flight record flying a Bellanca powered by a Packard diesel for 84 hours and 32 minutes. X7654 is now owned by Greg Herrick and is at the Golden Wings Flying Museum near Minneapolis, Minnesota.
Diesel engines for airships were developed in both Germany and the United Kingdom by Daimler - Benz and Beardmore produced the Daimler - Benz DB 602 and Beardmore Typhoon respectively. The LZ 129 Hindenburg rigid airship was powered by four Daimler - Benz DB 602 16 - cylinder diesel engines, each with 1,200 hp (890 kW) available in bursts and 850 horsepower (630 kW) available for cruising. The Beardmore Typhoon powered the ill - fated R101 airship, built for the Empire airship programme in 1931.
With a production run of at least 900 engines, the most - produced aviation diesel engine in history was probably the Junkers Jumo 205. Similar developments from the Junkers Motorenwerke and licence - built versions of the Jumo 204 and Jumo 205, boosted German diesel aero - engine production to at least 1000 examples, the vast majority of which were liquid - cooled, opposed - piston, two - stroke engines.
In the Soviet Union significant progress towards practical diesel aero - engines was made by the TsIAM (Tsentral'nyy Institut Aviatsionnovo Motorostroyeniya -- central institute of aviation motors) and particularly by A.D. Charomskiy, who nursed the Charomskiy ACh - 30 into production and limited operational use.
Armored fighting vehicles use diesel because of its lower flammability risks and the engines ' higher provision of torque and lower likelihood of stalling.
Diesel - powered cars generally have better fuel economy than equivalent gasoline engines and produce less greenhouse gas emissions. Their greater economy is due to the higher energy - per - litre content of diesel fuel and the intrinsic efficiency of the diesel engine. While petrodiesel 's higher density results in higher greenhouse gas emissions per litre compared to gasoline, the 20 -- 40 % better fuel economy achieved by modern diesel - engined automobiles offsets the higher per - litre emissions of greenhouse gases, and a diesel - powered vehicle emits 10 -- 20 percent less greenhouse gas than comparable gasoline vehicles. Biodiesel - powered diesel engines offer substantially improved emission reductions compared to petrodiesel or gasoline - powered engines, while retaining most of the fuel economy advantages over conventional gasoline - powered automobiles. However, the increased compression ratios mean there are increased emissions of oxides of nitrogen (NOx) from diesel engines. This is compounded by biological nitrogen in biodiesel, making NOx emissions the main drawback of diesel versus gasoline engines.
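The offset described above (higher CO2 per litre, but fewer litres burned per distance) can be made concrete with a rough sketch; the consumption figures are illustrative assumptions, and the per - litre CO2 values are typical combustion figures rather than data from this article:

```python
# Rough per-distance greenhouse-gas comparison. Typical combustion
# figures for CO2 released per litre of fuel burned:
CO2_PER_L = {"petrol": 2.31, "diesel": 2.68}   # kg CO2 per litre

petrol_use = 8.0                 # L/100 km, assumed petrol car
diesel_use = petrol_use * 0.75   # assumed 25% better volumetric economy

petrol_ghg = CO2_PER_L["petrol"] * petrol_use  # kg CO2 per 100 km
diesel_ghg = CO2_PER_L["diesel"] * diesel_use
saving = (1 - diesel_ghg / petrol_ghg) * 100
print(f"Diesel emits {saving:.0f}% less CO2 per 100 km")  # ~13%
```

With these assumed numbers the diesel car comes out roughly 13 % lower per distance, inside the 10 -- 20 % range quoted above, despite emitting more CO2 per litre.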
Today 's tractors and heavy equipment are mostly diesel - powered. Among tractors, only the smaller classes may also offer gasoline engines. The dieselization of tractors and heavy equipment began in Germany before World War II but was unusual in the United States until after that war. During the 1950s and 1960s, it progressed in the US as well.
Tractors and heavy equipment were often multifuel in the 1920s through 1940s, running spark - ignition and low - compression engines. Thus many farm tractors of the era could burn gasoline, alcohol, kerosene, and any light grade of fuel oil such as diesel fuel, heating oil, or tractor vaporising oil, according to whichever was most affordable in any region at any given time. On U.S. farms during this era, the name "distillate '' often referred to any of the aforementioned light fuel oils. The engines did not start as well on distillate, so typically a small auxiliary gasoline tank was used for cold starting, and the fuel valves were adjusted several minutes later, after warm - up, to switch to distillate. Engine accessories such as vaporizers and radiator shrouds were also used, both with the aim of capturing heat, because when such an engine was run on distillate, it ran better when both it and the air it inhaled were warmer rather than at ambient temperature. Dieselization with dedicated diesel engines (high - compression with mechanical fuel injection and compression ignition) replaced such systems and made more efficient use of the diesel fuel being burned.
Poor quality diesel fuel has been used as an extraction agent for liquid -- liquid extraction of palladium from nitric acid mixtures. Such use has been proposed as a means of separating the fission product palladium from PUREX raffinate which comes from used nuclear fuel. In this system of solvent extraction, the hydrocarbons of the diesel act as the diluent while the di alkyl sulfides act as the extractant. This extraction operates by a solvation mechanism. So far, neither a pilot plant nor full scale plant has been constructed to recover palladium, rhodium or ruthenium from nuclear wastes created by the use of nuclear fuel.
Diesel fuel is also often used as the main ingredient in oil - base mud drilling fluid. The advantage of using diesel is its low cost and that it delivers excellent results when drilling a wide variety of difficult strata including shale, salt and gypsum formations. Diesel - oil mud is typically mixed with up to 40 % brine water. Due to health, safety and environmental concerns, Diesel - oil mud is often replaced with vegetable, mineral, or synthetic food - grade oil - base drilling fluids, although diesel - oil mud is still in widespread use in certain regions.
During development of rocket engines in Germany during World War II J - 2 Diesel fuel was used as the fuel component in several engines including the BMW 109 - 718. J - 2 diesel fuel was also used as a fuel for gas turbine engines.
Petroleum - derived diesel is composed of about 75 % saturated hydrocarbons (primarily paraffins including n, iso, and cycloparaffins), and 25 % aromatic hydrocarbons (including naphthalenes and alkylbenzenes). The average chemical formula for common diesel fuel is C12H23, ranging approximately from C10H20 to C15H28.
Most diesel fuels freeze at common winter temperatures, though the exact temperatures vary greatly. Petrodiesel typically freezes around − 8.1 ° C (17.5 ° F), whereas biodiesel freezes between 2 ° and 15 ° C (35 ° and 60 ° F). The viscosity of diesel noticeably increases as the temperature decreases, changing it at temperatures of − 19 ° C (− 2.2 ° F) to − 15 ° C (5 ° F) into a gel that cannot flow in fuel systems. Conventional diesel fuels vaporise at temperatures between 149 ° C and 371 ° C.
Conventional diesel flash points vary between 52 and 96 ° C, which makes it safer than petrol and unsuitable for spark - ignition engines. Unlike petrol, the flash point of a diesel fuel has no relation to its performance in an engine nor to its auto ignition qualities.
Diesel engines, as with other forms of combustion, produce the mono - nitrogen oxides NO and NO2, collectively known as NOx. NOx reacts with ammonia, moisture, and other compounds to form nitric acid vapor and related particles. Modern diesel engines (Euro 6 & EPA standards) use urea injection to turn NOx into N2 and water: the urea reacts with the nitrogen oxides to form water and nitrogen.
Small particles (particulate matter, also known as PM 10 or PM 2.5 depending on size) can penetrate deeply into lung tissue and damage it, causing premature death in extreme cases. Inhalation of such particles may cause or worsen respiratory diseases, such as emphysema or bronchitis, and may also aggravate existing heart disease. Unlike direct injection gasoline / petrol engines, modern diesel engines are fitted with particulate traps that help to eliminate PM 10 and PM 2.5.
High levels of sulfur in diesel are harmful for the environment because they prevent the use of catalytic diesel particulate filters to control diesel particulate emissions, as well as more advanced technologies, such as nitrogen oxide (NOx) adsorbers (still under development), to reduce emissions. Moreover, sulfur in the fuel is oxidized during combustion, producing sulfur dioxide and sulfur trioxide, which in the presence of water rapidly convert to sulfuric acid, one of the chemical processes that results in acid rain. However, the process for lowering sulfur also reduces the lubricity of the fuel, meaning that additives must be put into the fuel to help lubricate engines. Biodiesel and biodiesel / petrodiesel blends, with their higher lubricity levels, are increasingly being utilized as an alternative. The U.S. annual consumption of diesel fuel in 2006 was about 190 billion litres (42 billion imperial gallons or 50 billion US gallons).
In the past, diesel fuel contained higher quantities of sulfur. European emission standards and preferential taxation have forced oil refineries to dramatically reduce the level of sulfur in diesel fuels. In the European Union, the sulfur content has been dramatically reduced during the last 20 years. Automotive diesel fuel is covered in the European Union by standard EN 590. In the 1990s, specifications allowed a maximum sulfur content of 2000 ppm, reduced to a limit of 350 ppm by the beginning of the 21st century with the introduction of Euro 3 specifications. The limit was lowered to 50 ppm (ULSD, ultra-low - sulfur diesel) with the introduction of Euro 4 by 2006. The standard currently in force in Europe for diesel fuel is Euro 5, with a maximum sulfur content of 10 ppm.
In the United States, more stringent emission standards have been adopted with the transition to ULSD starting in 2006, and becoming mandatory on June 1, 2010 (see also diesel exhaust). U.S. diesel fuel typically also has a lower cetane number (a measure of ignition quality) than European diesel, resulting in worse cold weather performance and some increase in emissions.
There has been much discussion and misunderstanding of algae in diesel fuel. Algae need light to live and grow. As there is no sunlight in a closed fuel tank, no algae can survive, but some microbes can survive and feed on the diesel fuel.
These microbes form a colony that lives at the interface of fuel and water. They grow quite fast in warmer temperatures. They can even grow in cold weather when fuel tank heaters are installed. Parts of the colony can break off and clog the fuel lines and fuel filters.
Water in fuel can damage a fuel injection pump; some diesel fuel filters also trap water. Water contamination in diesel fuel can lead to freezing while in the fuel tank. Freezing water that saturates the fuel will sometimes clog the fuel injector pump. Once the water inside the fuel tank has started to freeze, gelling is more likely to occur. Gelled fuel is unusable until the temperature is raised and the fuel returns to a liquid state.
Diesel is less flammable than gasoline / petroleum spirit. However, because it evaporates slowly, spills on a roadway can pose a slip hazard to vehicles. After the light fractions have evaporated, a greasy slick is left on the road which reduces tire grip and traction, and can cause vehicles to skid. The loss of traction is similar to that encountered on black ice, resulting in especially dangerous situations for two - wheeled vehicles, such as motorcycles and bicycles, in roundabouts.
|
who do the rockets play in the first round of the playoffs | 2018 NBA playoffs - wikipedia
The 2018 NBA Playoffs is the postseason tournament of the National Basketball Association 's 2017 -- 18 season. The playoffs began on April 14, 2018 and will end in June at the conclusion of the 2018 NBA Finals.
Within each conference, the eight teams with the most wins qualify for the playoffs. The seedings are based on each team 's record.
Each conference 's bracket is fixed; there is no reseeding. All rounds are best - of - seven series; the team that has four wins advances to the next round. All rounds, including the NBA Finals, are in a 2 -- 2 -- 1 -- 1 -- 1 format. Home court advantage in any round does not necessarily belong to the higher - seeded team, but instead to the team with the better regular season record. If two teams with the same record meet in a round, standard tiebreaker rules are used. The rule for determining home court advantage in the NBA Finals is winning percentage, then head to head record, followed by record vs. opposite conference.
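The home - court rule described above depends on regular - season record rather than seed; a minimal sketch of that rule (ignoring the tiebreaker rules mentioned above, and using illustrative win - loss records) might look like:

```python
# Sketch of the NBA playoff home-court rule: the team with the better
# regular-season winning percentage hosts, regardless of seed.
# Tiebreakers for identical records are omitted for brevity.
def home_court(team_a, team_b, records):
    wa, la = records[team_a]
    wb, lb = records[team_b]
    pct_a = wa / (wa + la)
    pct_b = wb / (wb + lb)
    return team_a if pct_a >= pct_b else team_b

# Illustrative (wins, losses) records
records = {"Rockets": (65, 17), "Timberwolves": (47, 35)}
print(home_court("Rockets", "Timberwolves", records))  # Rockets
```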
On March 7, 2018, the Toronto Raptors became the first team to clinch a playoff spot. On March 30, 2018, the Houston Rockets clinched the Western Conference, ending a three - year run by the Golden State Warriors as the top seed. The Rockets clinched the best record in the NBA a day later on March 31, 2018. For the first time since the 1996 -- 97 NBA season, two teams played their last game against each other for the 8th and final spot in the playoffs. The Minnesota Timberwolves defeated the Denver Nuggets 112 -- 106 in overtime to clinch the final playoff seed in the West. This also ended Minnesota 's 13 - year playoff drought, the team having last appeared in the 2003 -- 04 season. For the first time since the 2010 -- 11 NBA season, the Los Angeles Clippers missed the postseason, following a loss to the Denver Nuggets on April 7, 2018. This was the first time since 1960 that none of the teams from Los Angeles, New York, and Chicago made the playoffs.
Teams in bold advanced to the next round. The numbers to the left of each team indicate the team 's seeding in its conference, and the numbers to the right indicate the number of games the team won in that round. The division champions are marked by an asterisk. Teams with home court advantage, the higher seeded team, are shown in italics.
* Division winner Bold Series winner Italic Team with home - court advantage
This was the second playoff meeting between these two teams, with the Wizards winning the first meeting.
This was the sixth playoff meeting between these two teams, with the Celtics winning four of the first five meetings.
With the win, the Sixers won their first playoff series since 2012.
This was the second playoff meeting between these two teams, with the Heat winning the first meeting.
LeBron James capped off his heroic Game 5 performance with a game - winning 3 at the buzzer to put the Cavaliers up 3 - 2 in the series. This was the fourth time James has hit a game - winning buzzer beater in the playoffs.
This was the third playoff meeting between these two teams, with each team winning one series.
This was the second playoff meeting between these two teams, with the Rockets winning the first meeting.
This was the fourth playoff meeting between these two teams, with the Warriors winning two of the first three meetings.
The Pelicans completed a sweep of the Trail Blazers for their first playoff series win since the 2008 NBA Playoffs, when they defeated the Dallas Mavericks as the New Orleans Hornets.
This was the first playoff meeting between the Trail Blazers and Pelicans.
The Thunder trailed by as many as 25 points in the 3rd quarter. However, Russell Westbrook and Paul George combined for 47 second - half points to help keep their season alive. The Thunder outscored the Jazz 61 -- 28 after the comeback began with 8:32 left in the 3rd quarter. The 25 - point rally was the largest in franchise history and one of the biggest comebacks by a team facing elimination in playoff history.
This was the fifth playoff meeting between the SuperSonics / Thunder franchise and the Jazz, but the first since the Seattle SuperSonics relocated to Oklahoma City and became the Thunder in 2008. The two teams have split their previous four playoff matchups.
LeBron James capped off a 38 - point performance with a mid-range fadeaway bank shot at the buzzer to lead the Cavs to a commanding 3 -- 0 series lead.
This was the third playoff meeting between these two teams, with Cleveland winning the first two meetings.
This was the 21st playoff meeting between these two teams, with the Celtics winning 12 of the first 20 meetings.
This was the eighth playoff meeting between these two teams, with the Jazz winning five of the first seven meetings.
This was the second meeting in the playoffs between the two teams, with the Warriors winning the first meeting.
The Celtics ' loss at home after leading 3 -- 2 in the series was their first since 2009. It was only the fifth time a road team won Game 7 after the home team had won the first six games of a series. LeBron James became the first non-Celtic to advance to eight consecutive NBA Finals.
This was the eighth playoff meeting between these two teams, with the Celtics winning four of the first seven meetings.
This was the largest margin of victory in franchise history, surpassing the 39 - point victory set in 1948, and the biggest defeat, surpassing a 40 - point loss in 2005.
Golden State rallied from a 17 - point first quarter deficit by outscoring Houston 64 -- 25 in the second half to force a Game 7. The Rockets ' 25 second - half points tied a franchise record low for scoring in any half in the postseason.
This was the third playoff meeting between these two teams, with the Warriors winning the first two meetings.
This will be the fourth meeting in the NBA Finals between these two teams, with the Warriors winning two of the first three meetings.
ESPN, TNT, ABC, and NBA TV will broadcast the playoffs nationally in the United States. In the first round, regional sports networks affiliated with the teams can also broadcast the games, except for games televised on ABC. Throughout the first two rounds, TNT televised games Sunday through Thursday, ESPN televised games Thursday and Friday, and ABC televised selected games on Saturday and Sunday, usually in the afternoon. NBA TV has aired select weekday games in the first round. ESPN will be televising the Eastern Conference Finals, while the Western Conference Finals will be televised by TNT. ABC will have exclusive television rights to the NBA Finals for the 16th consecutive year.
|
paul blart mall cop 2 what happened to amy | Paul Blart: Mall Cop 2 - Wikipedia
Paul Blart: Mall Cop 2 is a 2015 American action comedy film directed by Andy Fickman and written by Kevin James and Nick Bakay. It is the sequel to 2009 's Paul Blart: Mall Cop, and stars James as the eponymous mall cop, Paul Blart, along with Neal McDonough, David Henrie, and Daniella Alonso.
Filming began in April 2014 at the Wynn Las Vegas casino resort. It was released the following year on April 17, 2015. Paul Blart: Mall Cop 2 was the first film shot on the Steve Wynn property. It was also the first film to receive Nevada 's film tax credit enacted in 2013. The film grossed $107 million worldwide at the box office and has a 5 % approval rating at Rotten Tomatoes.
Paul Blart (Kevin James) narrates his several misfortunes and his hard recovery. His wife Amy (Jayma Mays) divorced him six days into their marriage and to feel better, Paul takes pride in patrolling the West Orange Pavilion Mall. Two years later, his mother Margaret (Shirley Knight) was killed after being hit by a milk truck. Four years after that, as Paul narrates "he had officially peaked '', he receives an invitation to a security officers ' convention in Las Vegas and begins to believe his luck is about to change. His daughter Maya Blart (Raini Rodriguez) discovers that she was accepted into UCLA and plans to move across the country to Los Angeles, but in light of her father 's invitation, she decides to withhold the information for now.
After arriving in Las Vegas, Paul and his daughter meet the general manager of Wynn Hotel, a pretty young woman named Divina Martinez (Daniella Alonso), to whom Paul is instantly attracted. He later learns that she 's dating the hotel 's head of security, Eduardo Furtillo (Eduardo Verástegui). Meanwhile, Maya and the hotel 's valet, Lane (David Henrie) become instantly attracted to each other. A security guard from the Mall of America attending the convention, Donna Ericone (Loni Love), is aware of Paul 's earlier heroics in the West Orange Pavilion Mall incident and believes Paul will be the likely keynote speaker at the event. However, Paul discovers that another security guard, Nick Panero (Nicholas Turturro), is giving the speech.
In the midst of the convention, a criminal named Vincent Sofel (Neal McDonough) and a gang of accomplices disguised as hotel employees are secretly plotting to steal priceless works of art from the hotel and replace them with replicas, then sell the real ones at auction. In the meantime, Paul has become overprotective of Maya after discovering her flirting with Lane and spies on their conversations. He is later mocked by Eduardo for his lack of professionalism in an incident where hotel security is notified after Maya goes missing. In an ensuing argument with her father, Maya boldly declares she 's attending UCLA despite Paul 's wishes that she remain close to home at a junior college.
At the convention, Paul, Donna, and three other security guards, Saul Gundermutt (Gary Valentine), Khan Mubi (Shelly Desai), and Gino Chizzeti (Vic Dibitetto), check out the non-lethal security equipment on display. Later, Paul finds Panero drunkenly hitting on a woman at the bar. Paul attempts to defuse the situation and Panero passes out, giving Paul a chance to be the event 's speaker. He contacts Maya asking her to attend, but he learns that she 's at a party with Lane. As Paul prepares his speech, Vincent and his cohorts put their plan into motion. Maya absentmindedly walks into the midst of the heist and is taken hostage. Lane is kidnapped as well while searching for her. Paul gives a rousing speech that moves everyone at the convention, as well as Divina, who inexplicably finds herself becoming more attracted to Paul with each passing moment. Following the speech, Paul learns about Maya and Lane 's situation and rushes to help but passes out due to the hypoglycemic condition that has plagued him for years.
After recovering, Paul is able to take down several of Vincent 's thugs and gathers intel on the group 's intentions. Using non-lethal equipment from the convention, he is able to take out more of Vincent 's crew. Meanwhile, Maya and Lane overhear Vincent adamantly refusing an oatmeal cookie due to a severe oatmeal allergy. Working with a team -- Donna, Saul, Khan, and Gino -- Paul is able to clumsily dismantle Vincent 's operation, with Maya severely incapacitating Vincent by rubbing oatmeal - infused concealer on his face. Afterward, Paul convinces Divina that her attraction to him is misplaced and that Eduardo is the one she should really be with. Paul also accepts Maya going to UCLA, funding her tuition with the reward money he obtained from Steve Wynn for stopping Vincent. After dropping off Maya at UCLA, Paul falls in love with a passing mounted police officer who reciprocates his advances.
In January 2009, Sony expressed an interest in making a sequel to Paul Blart: Mall Cop. It was revealed on January 7, 2014 that Andy Fickman was in talks to direct the film while Kevin James, who also co-wrote the script with Nick Bakay, would be back to star as Blart. James produced the film along with Todd Garner and Happy Madison 's Adam Sandler. The cast includes David Henrie, Raini Rodriguez, Eduardo Verástegui, Nicholas Turturro, Gary Valentine, Neal McDonough, Daniella Alonso, and D.B. Woodside, starring alongside James.
On March 14, 2014, the Nevada Film Office announced that Sony Pictures had been awarded the first certificate of eligibility for a new tax credit enacted in 2013, in regard to the filming of Paul Blart: Mall Cop 2. Nevada Film Office Director, Eric Preiss, indicated that the production would get $4.3 million in tax credits based on the proposal in their application. On April 2, 2014, Columbia Pictures announced that the film would be released on April 17, 2015.
In an October 2012 interview, James said that he liked the idea of filming the sequel at the Mall of America. Principal photography commenced on April 21, 2014, at Wynn Las Vegas, and ended on June 26, 2014. It is the first time that Steve Wynn has allowed a commercial film to be shot at this property.
Paul Blart: Mall Cop 2 was released by Columbia Pictures in the United States on April 17, 2015. Paul Blart: Mall Cop 2 grossed $71 million in North America and $36.2 million in other territories for a worldwide total of $107.3 million, against a budget of $30 million. In its opening weekend, the film grossed $23.8 million, finishing second at the box office behind Furious 7 ($29.2 million).
The film was released on DVD and Blu - ray on July 14, 2015.
On Rotten Tomatoes, the film received a rating of 5 % based on 57 reviews and an average score of 2.5 / 10. The site 's consensus reads, "Bathed in flop sweat and bereft of purpose, Paul Blart: Mall Cop 2 strings together fat - shaming humor and Segway sight gags with uniformly unfunny results. '' On Metacritic, the film has a score of 13 out of 100, based on 16 critics, indicating "overwhelming dislike ''. Audiences polled by CinemaScore gave the film an average grade of "B - '' on an A+ to F scale.
Sara Stewart of the New York Post gave the film one out of four stars, saying "This wisp of a plot is just an excuse for James to do his one trick over and over: Bluster, then screw up humiliatingly. Is it never funny? No, it 's not never funny. It 's just not funny nearly often enough. '' Frank Scheck of The Hollywood Reporter gave the film a negative review, saying "James tries hard, very hard, to inject the proceedings with slapstick humor, propelling his large body through endless physical contortions in a fruitless effort for laughs. '' Justin Chang of Variety gave the film a negative review, saying "Kevin James keeps falling on his face and colliding with heavy objects, this time in Vegas, in this tacky, numbingly inane sequel. ''
Christy Lemire of RogerEbert.com gave the film zero stars, saying "Truly, there is not a single redeeming moment in director Andy Fickman 's film. A general flatness and lethargy permeate these reheated proceedings. '' Kevin P. Sullivan of Entertainment Weekly gave the film a D, saying "Far from the worst movie that you 'll ever see, but you might leave wondering why you, the people on the screen, or anyone else in the theater even bothered. '' Peter Howell of the Toronto Star gave the film half a star out of four, saying "Caddyshack 2. Exorcist 2. Speed 2. To this small sample of the ever - expanding list of wretched movie sequels, add Paul Blart: Mall Cop 2, a gobsmackingly witless excuse for entertainment. '' Andy Webster of The New York Times gave the film a negative review, saying "You won't find much offensive in Kevin James 's slick, innocuous vehicle Paul Blart: Mall Cop 2. You won't find much prompting an emotional reaction in general, so familiar are the jokes and situations. If Mr. James 's character thinks of safety first, so does this movie, to its extreme detriment. ''
|
where did college inn broth get its name | Broth - Wikipedia
Broth is a savory liquid made of water in which bones, meat, fish, or vegetables have been simmered. It can be eaten alone, but is most commonly used to prepare other dishes such as soups, gravies, and sauces.
Commercially prepared liquid broths are available, typically for chicken broth, beef broth, and vegetable broth. In North America dehydrated meat stock, in the form of tablets, is called a bouillon cube. Industrially produced bouillon cubes were commercialized by Maggi in 1908 and by Oxo in 1910. Using commercially prepared broths allows cooks to save time in the kitchen.
Many cooks and food writers use the terms broth and stock interchangeably, and even when distinctions are made, they often vary from person to person. In 1974, James Beard wrote more emphatically that "they 're all the same thing ''.
However, a traditional distinction between stock and broth is that stocks are made primarily from animal bones, as opposed to meat, and therefore contain more gelatin, giving them a thicker texture. Another distinction that is sometimes made is that stock is cooked longer than broth and therefore has a more intense flavor. A third possible distinction is that stock is left unseasoned for use in other recipes, while broth is salted and otherwise seasoned and can be eaten alone.
Bouillon is the French word for "broth '', and is usually used as a synonym for it.
Broth has been made for many years using animal bones, which traditionally are boiled in a cooking pot for long periods to extract their flavor and nutrients. The bones may or may not have meat still on them.
Egg whites may be added during simmering when it is necessary to clarify (i.e., purify, or refine a broth for a cleaner presentation). The egg whites will coagulate, trapping sediment and turbidity into an easily strained mass. Not allowing the original preparation to boil will increase the clarity.
Roasted bones will add a rich flavor to the broth but also a dark color.
In Britain, a broth is defined as a soup in which there are solid pieces of meat or fish, along with some vegetables. A broth is usually made with a stock or plain water as its base, with meat or fish added while being brought to a boil, and vegetables added later. Being a thin and watery soup, broth is frequently made more substantial by adding rice, barley or pulses.
In East Asia (particularly Japan), a form of kelp called kombu is often used as the basis for broths (called dashi in Japanese).
In the Maldives the tuna broth known as garudiya is a basic food item, but it is not eaten as a soup in the general sense of the term.
By 2013, bone broth had become a popular health food trend, due to the resurgence in popularity of dietary fat over sugar, and interest in "functional foods '' to which "culinary medicinals '' such as turmeric and ginger could be added. Bone broth bars, bone broth home delivery services, and bone broth carts and freezer packs grew in popularity in the United States, the fad heightened by the 2014 book Nourishing Broth, in which authors Sally Fallon Morell and Kaayla T. Daniel state that the broth 's nutrient density has a variety of health effects, including boosting the immune system; improving joints, skin and hair due to collagen content; and promoting healthy teeth and bones due to calcium, magnesium and phosphorus levels.
However, there is no scientific evidence to support many of the claims made for bone broth. For example, while bone broths do contain collagen, they do not relieve joint pain or improve skin, because dietary collagen is broken down into amino acids, which become building blocks for body tissues, and is not transported directly to joints or skin in the form in which it is ingested. In addition, the fact that the broth is derived from bone does not mean that it will build bone or prevent osteoporosis, as the bones release very little calcium into the broth when prepared. Bone broths also do not improve digestion, as there is little evidence that the gelatin they contain functions as a digestive aid. A few small studies have found some possible benefit for chicken broth, such as the clearing of nasal passages. Chicken soup may also reduce inflammation; however, this effect has not been confirmed in controlled studies of adults.
|
when do german shorthaired pointers get their ticking | German Shorthaired Pointer - Wikipedia
The German Shorthaired Pointer (GSP) is a medium to large sized breed of dog developed in the 19th century in Germany for hunting. A versatile, all - purpose gun dog suitable for both land and water, it is streamlined yet powerful with strong legs that make it able to move rapidly and turn quickly. It has moderately long floppy ears set high on the head. Its muzzle is long, broad, and strong, allowing it to retrieve even heavy game. The dog 's profile should be straight or strongly Roman nosed; any dished appearance to the profile is incorrect. The eyes are generally brown, with darker eyes being desirable; yellow or "bird of prey '' eyes are a fault. The tail is commonly docked, although this is now prohibited in some countries. The correct location for docking a GSP 's tail is after the caudal vertebrae start to curl, leaving enough tail to let the dog communicate through tail wagging and movement. The docked tail should not be too long or too short but should balance the appearance of the head and body. The GSP tail is carried at a jaunty angle, not curled under. When the GSP is in classic point stance, the tail should be held straight out from the body, forming a line with the pointing head and body. Like all German pointers, GSPs have webbed feet and are known for retrieving waterfowl from the water.
The German Shorthaired Pointer is a member of the Sporting Group.
The German Shorthaired Pointer 's coat is short and flat with a dense undercoat protected by stiff guard hairs making the coat water resistant and allowing the dog to stay warm in cold weather. This allows the German Shorthaired Pointer to be an agile hunter with high performance in both field and water. The color can be a dark brown, referred to as "liver '' (incorrectly as "chocolate '' or "chestnut ''), black (although any area of black is cause for disqualification in American Kennel Club - sanctioned shows), black roan, white, liver roan, liver and white, or black and white. The American Kennel Club recognizes only a solid liver or liver and white coat. Commonly, the head is a solid or nearly solid color, and the body is speckled or "ticked '' with liver and white, sometimes with large patches of solid color called "saddles. '' Roan coats are also common, with or without patching. Solid liver and solid black coats also occur, often with a small blaze of ticking or white on the chest. While the German standard permits a slight sandy colouring ("Gelber Brand '') at the extremities, this colouring is rare, and a dog displaying any yellow colouring is disqualified in AKC and CKC shows. The colouring of the GSP provides camouflage in the winter seasons. The coat can be very glossy if washed.
The temperament of dogs can be affected by different factors, including heredity, training, and socialization. The German Shorthaired Pointer was developed to be a dog suited for family life as well as a versatile hunter. Its temperament is therefore that of an intelligent, bold, boisterous, eccentric, and characteristically affectionate dog that is cooperative and easily trained. This breed is smart, friendly, willing, and enthusiastic. The GSP is usually good with children, although care should be taken because the breed can be boisterous, especially when young. These dogs love interaction with humans and are suitable pets for active families who will give them an outlet for their considerable energy; they must be run vigorously multiple times a week. The breed should be socialized early, with exposure to different people, sights, sounds, and experiences while young; this helps ensure that a German Shorthaired Pointer puppy grows up to be a well - rounded dog. Enrolling the dog in a training class is an important part of its education. Most German Shorthaired Pointers make excellent watchdogs. The breed generally gets along well with other dogs, though females appear to be much more dominant during interbreed interaction. A strong hunting instinct is correct for the breed, which is not always good for other small pets such as cats or rabbits.
The German Shorthaired Pointer needs plenty of vigorous activity and thrives with lots of exercise and running. This need for exercise (preferably off lead) coupled with the breed 's natural instinct to hunt, means that training is an absolute necessity. The GSP 's distinctly independent character means that any unused energy will likely result in the dog amusing itself, most probably in an undesirable manner.
Failure by the owner to give this active and intelligent dog sufficient exercise and / or proper training can produce a German Shorthaired Pointer that appears hyperactive or that has destructive tendencies. Thus the breed is not a suitable pet for an inactive home or for inexperienced dog owners. Although these dogs form very strong attachments with their owners, a bored GSP that receives insufficient exercise may feel compelled to exercise itself. These dogs are athletic and can escape from four - to six - foot enclosures with little difficulty. Regular hunting, running, carting, bikejoring, skijoring, mushing, dog scootering or other vigorous activity can alleviate this desire to escape. The natural instinct to hunt may result in the dog hunting alone and sometimes bringing home occasional dead trophies, such as cats, rats, pigeons and other urban animals. In addition to exercise, especially formal hunting, the GSP needs to be taught to distinguish between legitimate prey and off - limits animals.
Like the other German pointers (the German wirehaired pointer and the less well known German longhaired pointer), the GSP can perform virtually all gun dog roles. It is pointer and retriever, an upland bird dog and water dog. The GSP can be used for hunting larger and more dangerous game. It is an excellent swimmer but also works well in rough terrain. It is tenacious, tireless, hardy, and reliable. German Shorthaired Pointers are proficient with many different types of game and sport, including trailing, retrieving, and pointing pheasant, quail, grouse, waterfowl, raccoons, possum, and even deer.
Most German Shorthaired Pointers are tough, healthy dogs, but the breed can be subject to a number of hereditary disorders due to their breeding. Some of these health disorders include hypothyroidism, hip dysplasia, osteochondrosis dissecans (OCD), pannus, progressive retinal atrophy (PRA), epilepsy, skin disorders and cancerous lesions in the mouth, on the skin and other areas of the body. As with other breeds, un-spayed female GSPs are prone to breast cancer. This risk is reduced if they are spayed.
Many factors, like genetics, environment, and diet, can contribute to hip dysplasia, a deformity of the hip joint. Not all German Shorthaired Pointers will develop it if they lead a healthy lifestyle, though in severe cases surgical correction may be required. Like many other deep - chested dogs, German Shorthaired Pointers are highly prone to gastric dilatation volvulus (GDV), also known as bloat. This is a life - threatening condition requiring immediate veterinary treatment. GDV occurs especially if the dog is fed one large meal a day, eats rapidly, drinks large amounts of water after eating, or exercises vigorously after eating. In GDV, the stomach distends with gas or air and then twists (torsion), so that the dog is unable to rid itself of the excess air in the stomach through burping or vomiting. The normal return of blood to the heart is also impeded, causing a drop in blood pressure, and the dog will go into shock. Without immediate medical attention, the dog may die. Some symptoms of GDV are: distended abdomen, excessive salivation, retching without throwing up, restlessness, depression, lethargy, and weakness. Precautions against GDV include: refraining from feeding immediately before or after exercise, feeding several smaller meals throughout the day instead of a single large meal, and avoiding the consumption of large amounts of water with dry food.
As with any other hunting dog, contact with game can spread fungi and bacteria that can easily colonize the gums or cause infections in open wounds and small cuts from scratching against plants and bushes during a regular hunting session.
German Shorthaired Pointers, along with other sporting dogs, require a lot of exercise and space to run. GSPs are among the most energetic breeds; if not given enough attention and activity, they can become bored and destructive. GSPs do not do well left alone all day or relegated to a kennel without plenty of human interaction.
GSPs are a very clean breed. The short GSP coat needs very little grooming, just occasional brushing, although the breed sheds constantly. GSPs should be bathed only when needed.
Like all dogs with floppy ears, GSPs can be prone to ear infections, and their ears require regular checking and cleaning.
The GSP has a median lifespan of 9 years in a Danish survey and 12 years in a UK survey. In the UK survey, about 1 in 8 lived to more than 15 years, with the longest - lived dog reaching 17 years.
As the GSP is a medium / large, active breed, the dogs can require considerable food. Older or less active GSPs can also become obese if fed more than suitable for the individual 's activity levels. A healthy weight should permit the last two ribs to be felt under the coat and the dog should have a distinct waist or "tuck - up ''.
Due to the short GSP coat, body heat management is not generally a problem. However, the GSP 's high levels of activity require the breed to drink considerable amounts of water to prevent dehydration. Early symptoms of dehydration include thick saliva and urine with an excessively strong and distinct smell.
German Shorthaired Pointers are still used today as versatile hunting and gun dogs.
The precise origin of the German Shorthaired Pointer is unclear. According to the American Kennel Club, it is likely that the GSP is descended from a breed known as the German Bird Dog, which itself is related to the Old Spanish Pointer introduced to Germany in the 17th century. It is also likely that various German hound and tracking dogs, as well as the English Pointer and the Arkwright Pointer also contributed to the development of the breed. However, as the first studbook was not created until 1870, it is impossible to identify all of the dogs that went into creating this breed. The breed was officially recognized by the American Kennel Club in 1930.
Thomas Mann 's great love for his German Shorthaired is told in the book Bashan and I. Robert B. Parker 's most popular mystery series features a Boston detective known only as Spenser who has had a series of three solid - liver German shorthairs, all named Pearl: one who stood with him during a bear charge in his rural youth; one given to his girlfriend by her ex-husband; and the third Pearl, to keep company with Spenser and his girlfriend in their late middle age. Author Parker appears on many of the Spenser dust jackets with a solid - liver GSP male identical to the three incarnations of Pearl in the series.
Rick Bass 's ruminations on living and hunting with a German shorthaired pointer in Montana can be found in the book Colter: The True Story of the Best Dog I Ever Had.
Sportswriter Mel Ellis ' memoir Run, Rainey, Run, explores the extraordinary relationship he had with an extremely intelligent and versatile hunting German shorthaired pointer.
The 1978 film "Days of Heaven, '' written and directed by Terrence Malick, features a brief scene of dogs hunting the prairie. The GSP shown is Jocko von Stolzhafen, twice GSP National Champion (Field) and perhaps the best GSP of his era. A year or so later Jocko vanished while running at a training camp, presumably stolen.
The logo of the Westminster Kennel Club is a Pointer, not a German shorthaired pointer, though frequently mistaken for the latter.
|
where did the name bb gun come from | BB gun - Wikipedia
BB guns are a type of air gun designed to fire spherical metal projectiles similar to shot pellets of approximately the same size. Modern BB guns usually have a barrel with a bore caliber of 4.5 mm (0.177 in) and are available in many varieties. These guns usually use steel BB shots, plated either with zinc or copper to resist corrosion, and measure 4.3 to 4.4 mm (0.171 to 0.173 in) in diameter and 0.33 to 0.35 g (5.1 to 5.4 gr) in weight. Some manufacturers still make lead balls around 0.48 to 0.50 g (7.4 to 7.7 gr) in weight and slightly larger in diameter, which are generally intended for use in rifled barrels.
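The paired metric and imperial figures above (bore diameter, shot diameter, and shot weight) can be cross - checked with the exact conversion factors 1 in = 25.4 mm and 1 grain = 0.06479891 g. A quick sketch:

```python
IN_TO_MM = 25.4          # exact: 1 inch is defined as 25.4 mm
GRAIN_TO_G = 0.06479891  # exact: 1 grain is defined as 64.79891 mg

def in_to_mm(inches: float) -> float:
    """Convert inches to millimetres."""
    return inches * IN_TO_MM

def grains_to_g(grains: float) -> float:
    """Convert grains to grams."""
    return grains * GRAIN_TO_G

# The 0.177 in bore works out to about 4.5 mm,
# and 5.1 - 5.4 gr steel BBs to about 0.33 - 0.35 g.
print(round(in_to_mm(0.177), 2))   # 4.5
print(round(grains_to_g(5.1), 2))  # 0.33
print(round(grains_to_g(5.4), 2))  # 0.35
```

The rounded results match the paired figures quoted in the text.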
The term "BB gun '' is often incorrectly used to describe a pellet gun, which fires non-spherical projectiles. Although in many cases a steel BB can be fired in a pellet gun, pellets usually can not be fired in a gun specifically designed for BBs. Similarly, the term is also often used incorrectly to address airsoft guns, which shoot plastic balls that are larger but much less dense.
The term "BB '' originated from the nomenclature for sizes of steel shot used in shotguns. Size "BB '' shot was nominally 0.180 in (4.6 mm), but tended to vary considerably in size due to the loose tolerances of shotgun shells. The largest shotgun pellet commonly used was named OO, or "double ought '', and was used for hunting deer, hence the name buckshot, while the smaller BB - sized shot was typically used to shoot small and medium - sized birds and was therefore a birdshot.
Around 1900, Daisy Manufacturing Company (formerly Plymouth Air Rifle Company), one of the earliest makers of birdshot - caliber air rifles, changed their BB - size bore diameter to 0.175 in (4.4 mm), and began to market precision - made lead shot specifically for their BB guns. They called these "round shots '', but the BB name was already well established, and most users continued calling their guns BB guns, and the projectiles as BB shots or just BBs.
Subsequently, the term BB became generic, referring to any small spherical projectile of various calibers and materials. This includes bearing balls often utilized by anti-personnel mines, .177 caliber lead or steel shot used by air guns, plastic round balls (such as the pellets used by airsoft guns), small marbles, and many others. It has become common to refer to any small steel ball, such as a BB, as a "ball bearing ''. However, BBs should not be confused with ball bearings, which are mechanical components that use bearing balls.
BB guns can use any of the operating mechanisms used for air guns. However, due to the inherent limited accuracy and short range of the BB, only the simpler and less expensive mechanisms are generally used for guns designed to fire only BBs.
Because the strength of the steel BB does not allow it to be swaged with the low energies used to accelerate it through the barrel, BBs are slightly smaller (4.3 to 4.4 mm (0.171 to 0.173 in)) than the internal diameter of the barrel (4.5 mm (0.177 in)). This limits accuracy because little spin is imparted on the BB. It also limits range, because some of the compressed air used to accelerate it is lost around the BB. Since a BB will easily roll unhindered down the barrel, it is common to find guns that use a magnet in the loading mechanism to hold the BB at the rear of the barrel until it is fired.
The traditional, and still most common, powerplant for BB guns is the spring piston, usually patterned after a lever-action rifle or a pump-action shotgun. The lever-action rifle was the first type of BB gun, and still dominates the inexpensive youth BB gun market. The Daisy Model 25, modeled after a pump-action shotgun with a trombone pump-action mechanism, dominated the low-price, higher-performance market for over 50 years. Lever-action models generally have very low velocities, around 84 m/s (275 ft/s), a result of the weak springs used to keep cocking effort low for use by youths. The Daisy Model 25 typically achieved the highest velocities of its day, ranging from 114 to 145 m/s (375 to 475 ft/s).
Multiple-pump pneumatic guns are also common. Many pneumatic pellet guns can use BBs as a cheaper alternative to lead shot. Some of these guns have rifled barrels, but the slightly undersized BBs don't swage in the barrel, so the rifling does not impart a significant spin. These are the guns that benefit most from precision lead BB shot. Pneumatic BB guns can attain much higher velocities than the traditional spring-piston types.
The last common type of power for BB guns is pre-compressed gas, most commonly the 12 gram CO2 Powerlet. The Powerlet is a disposable metal flask containing 12 grams of compressed carbon dioxide, which expands to propel the BB. These are primarily used in pistol BB guns and, unlike spring-piston or pneumatic types, are capable of rapid fire. A typical CO2 BB pistol uses a spring-loaded magazine to feed BBs and a double-action trigger mechanism to chamber a BB and cock the hammer, though some guns (either to stay true to the original firearm or to make the trigger pull easier) have a single-action trigger. Either type may also have blowback action, in which CO2 pushes the slide back in addition to firing a BB, adding to the realism. When firing, the hammer strikes a valve connected to the CO2 source, which releases a measured amount of gas to fire the BB. Velocities of CO2-powered BB pistols are moderate, and drop off as the pressure in the CO2 source drops. Many CO2 BB guns are patterned after popular firearms such as the Colt M1911, and can be used for training as well as recreation.
Some gas-powered BB guns use a larger source of gas and provide machine gun-like fire. These types, most notably the Shooting Star Tommy Gun (originally known as the Feltman), are commonly found at carnivals. The MacGlashan BB Gun was used to train antiaircraft gunners in the United States Army Air Corps and United States Navy during World War II. A popular commercial model was the Larc M-19, which used 1 pound (454 g) canisters of Freon-12 refrigerant. These types have very simple operating mechanisms based on a venturi pump: the gas is released in a constant stream, which sucks the BBs up into the barrel at rates as high as 3,600 rounds per minute.
It is possible to shoot competitively with a BB gun. The National Rifle Association youth shooting program has classifications for smoothbore BB guns, open to ages 14 -- 18, and these classes are popular with youth groups such as Boy Scouts of America and 4H.
Most BB - firing airguns can shoot faster than 60 m / s (200 ft / s). Some airguns have the ability to fire considerably faster, even beyond 170 m / s (560 ft / s). Although claims are often exaggerated, a few airguns can actually fire a standard 0.177 caliber lead pellet faster than 320 m / s (1,000 ft / s), but these are generally not BB - firing guns.
A BB with a velocity of 45 m/s (150 ft/s) can pierce skin, and a velocity of 60 m/s (200 ft/s) can fracture bone. The potential exists for killing someone; this potential increases with velocity but rapidly decreases with distance. The effective penetrating range of a BB gun with a muzzle velocity of 120 to 180 m/s (390 to 590 ft/s) is approximately 18 m (60 ft). A person wearing jeans at this distance would not sustain serious injury; however, even at this distance a BB might still penetrate bare skin, and even if it does not, it can leave a severe and painful bruise. The maximum range of a BB gun in the 120 to 180 m/s (390 to 590 ft/s) class is 220 to 330 m (240 to 360 yd), provided the muzzle is elevated to the optimum angle.
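The velocity thresholds above can be put in perspective with the standard kinetic-energy formula E = ½mv². The sketch below assumes a 0.35 g steel BB, which is a typical but not universal mass (actual BB weights vary by manufacturer), so the resulting energies are illustrative estimates only:

```python
def bb_energy_joules(velocity_ms, mass_g=0.35):
    """Kinetic energy (in joules) of a BB of the given mass (grams)
    travelling at the given velocity (m/s): E = 0.5 * m * v**2."""
    return 0.5 * (mass_g / 1000.0) * velocity_ms ** 2

# Energies at the velocities mentioned above (assumed 0.35 g steel BB):
print(bb_energy_joules(45))   # ~0.35 J -- around the skin-piercing threshold
print(bb_energy_joules(60))   # 0.63 J  -- around the bone-fracture threshold
print(bb_energy_joules(150))  # ~3.9 J  -- a higher-powered BB gun muzzle energy
```

This also illustrates why energy, and thus hazard, grows quickly with velocity: doubling the velocity quadruples the kinetic energy.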
Steel BBs are also notably prone to ricochet off hard surfaces such as brick, concrete, metal, or wood end grain. Eye protection is essential when shooting BBs at these materials, even more so than when shooting lead pellets, since a BB bouncing off a hard surface can retain a large portion of its initial energy (pellets usually flatten and absorb energy) and could easily cause serious eye damage.
From 1967 to 1973, the U.S. Army trained recruits in "Quick Kill" techniques using Daisy Model 99 BB guns, to improve soldiers' instinctive shooting with their weapons in the Vietnam War. The technique was developed for the Army by Bobby Lamar "Lucky" McDaniel and Mike Jennings. The sights were removed from the BB guns for this training. The name was later changed to "Quick Fire" training.
BB guns are often regulated as a type of air gun. Air gun laws vary widely by jurisdiction.
One of the most famous BB guns is the Red Ryder BB Gun by Daisy Outdoor Products, modeled after the Winchester lever-action rifle. First introduced in 1938, it became an iconic American toy and is still in production today. It was prominently featured in A Christmas Story, in which Ralphie Parker requests one for Christmas but is repeatedly rebuffed with the warning "You'll shoot your eye out". The movie's fictional BB gun, described as the "Red Ryder carbine-action, two hundred shot Range Model air rifle with a compass in the stock and this thing which tells time", was not a real model. The Red Ryder featured in the movie was specially made to match author Jean Shepherd's description (which may be artistic license, but was the configuration Shepherd claimed to remember). The guns and a stand-up advertisement featuring the Red Ryder character appeared in a Higbee's store window in the film, along with dolls, a train, and Radio Flyer wagons.
An episode of My Name Is Earl called "BB '' focused on the title character having accidentally shot a girl with a BB gun when he was younger, inadvertently causing a falling - out between her and her father.
|
when does season 6 of once upon a time come on | Once Upon a Time (season 6) - wikipedia
The sixth season of the American ABC fantasy - drama Once Upon a Time was ordered on March 3, 2016. It debuted on September 25, 2016, and concluded on May 14, 2017. In January 2017, it was stated that the sixth season would end the main storyline, and for a seventh season, the series would be softly rebooted with a new storyline.
Existing fictional characters introduced to the series during the season include Aladdin, Princess Jasmine, the Count of Monte Cristo, Captain Nemo, Lady Tremaine, Beowulf, Tiger Lily and the Tin Man. Original new characters include Gideon, the Black Fairy, Mary Lydgate, and Robert. The show also reintroduced Jafar and Dr. Arthur Lydgate, who previously appeared in Once Upon a Time in Wonderland.
This season also marks the final appearance of Emma Swan (Jennifer Morrison) as a series regular. Morrison announced she would be departing the series after the sixth-season finale, but agreed to appear in at least one episode if the series were renewed for a seventh season; in what capacity is currently unclear. After serving as a series regular for two seasons, Rebecca Mader also announced that season six would be her last on the show. Shortly after, Adam Horowitz and Edward Kitsis announced that original cast members Ginnifer Goodwin, Josh Dallas, and Jared Gilmore, as well as Emilie de Ravin, who joined the main cast in season two, would also exit the show at the end of the season.
Just when it looks like Storybrooke can enjoy some peace, it is once more threatened by dark forces. The malevolent Mr. Hyde, now separated from Dr. Jekyll, has arrived and brought his fellow inhabitants from the Land of Untold Stories. To make matters worse, Regina's dark half, the Evil Queen, continues to exist despite her heart being crushed; unburdened by a conscience, the Evil Queen declares war on the heroes and separates Snow and David by placing a sleeping curse on their shared heart. Meanwhile, Aladdin's past as the previous Savior becomes a new factor in Emma's role as the current Savior, who is about to be pushed to her limits, which could lead her to a future with no happy ending, while Gold must try to win Belle's heart again so they can be a family for their future child. Later on, the Black Fairy abducts Gideon, the son of Gold and Belle, which further complicates things for Storybrooke's residents. As Emma, Regina, and an alternate-universe version of Robin Hood return to Storybrooke, Gideon arrives, now a grown man, and is revealed to be the one fated to kill Emma. These events lead to the Black Fairy, the creator of the Dark Curse, who is also controlling Gideon through his heart, crossing over into Storybrooke, as the ongoing war between light and darkness ultimately leads to the Final Battle that was prophesied before the casting of the original Curse. After the events of the Final Battle close the last chapter, finally bringing "Happy Beginnings" for everyone involved, a new one begins for a grown-up Henry, when his daughter Lucy arrives in Seattle from the Magical Forest for a new adventure.
Executive producers Adam Horowitz and Edward Kitsis announced that they were ending the half-season arc structure seen in seasons three through five, with Horowitz saying, "We're also planning a 22-episode story as opposed to breaking it up into two halves this year. It has been really exciting and fun." Kitsis added: "We are changing around what we're doing this year and going back to that season 1 mentality of small town stories and smaller arcs." This season focuses on Storybrooke as the main setting, but also shows new realms and explores the Savior's mythology. Kitsis and Horowitz hired directors from their Freeform series Dead of Summer: Norman Buckley, Mairzee Almas, and Michael Schultz helmed episodes of the season. Jennifer Lynch directed the eighth episode of the season, "I'll Be Your Mirror".
In January 2017, ABC president Channing Dungey was the first to suggest that the current narrative of the show would end with season six, regardless of whether there would be a season seven. Shortly after this news, Jennifer Morrison revealed that the contracts for the original main cast members were expiring in April, expressing uncertainty about the future of the show and her participation in it beyond the current season. Robert Carlyle also said that he'd have to make a decision about his future on the show by the end of that month. In March 2017, several sources reported that four of the current main cast members (Morrison, Carlyle, Lana Parrilla, and Colin O'Donoghue) were in negotiations to renew their contracts for a potential seventh season. In April 2017, Horowitz and Kitsis confirmed that a group of characters would indeed have their stories wrapped up by the end of the season, acknowledging the potential cast changes: "We planned this finale from the beginning of the year, so whoever stays and whoever goes... all those questions have already been dealt with. The audience does not have to fear (anything feeling) incomplete." On May 8, Morrison confirmed that she had declined an offer to remain on the show and that the sixth season would be her last, signaling the end of Emma's time as the main protagonist. On May 11, Rebecca Mader announced that she would also be leaving the show at the end of the season, citing creative decisions beyond her control. On May 12, season six was announced to be the last for four additional main actors: Ginnifer Goodwin, Josh Dallas, Jared Gilmore, and Emilie de Ravin. Thus, characters seeing their storylines wrapped up in the season six finale include Emma Swan, Snow White, Prince Charming, Belle, and Zelena.
The season finale revealed that Henry Mills will remain a series protagonist, with the setting shifting to a later time period in which he is portrayed as an adult by Andrew J. West.
In January 2017, TVLine reported that the series would feature a musical episode later in the season. Creators Kitsis and Horowitz had previously spoken about the desire to do a musical installment, but didn't "even know where to begin". Kitsis and Horowitz confirmed the report in February. They have since confirmed that the episode will feature seven original songs, including solos by Jennifer Morrison and Rebecca Mader, as well as a "sing-off" musical number featuring the Evil Queen and the Charmings. Composers Alan Zachary and Michael Weiner wrote the original numbers, with arrangements provided by the show's long-time composer Mark Isham. Kitsis and Horowitz spoke about the nature of the episode, saying: "It actually is a huge part of the mythology of the show and there are some big things that happen in the episode, frankly, it's been one of the challenges of doing the musical because we never wanted to do something where it was just like a to-the-side, one-off thing, and then get back to the main story. We want to see it part of the main story, which meant we had to really plan out this season with some great detail."
Emilie de Ravin told fans that she will be returning for the sixth season. It was also announced later that Sam Witwer and Hank Harris have signed a contract to play two new recurring characters on the sixth season of the show. On March 29, 2016, Lana Parrilla confirmed she would be returning for the sixth season. Robert Carlyle was confirmed to be returning for the sixth season as Rumplestiltskin along with Rebecca Mader as Zelena and Jared Gilmore as Henry Mills.
It was announced that Giles Matthey was cast as Morpheus, who is slated to appear in the first episode of the season. On July 20, it was announced that Craig Horner would be portraying the Count of Monte Cristo, who was introduced in the second episode of the season. At the 2016 San Diego Comic Con International it was revealed that the season would see the introduction of Aladdin (Deniz Akdeniz), and his story featuring the return of Jafar, now portrayed by Oded Fehr (the role had been previously played by Naveen Andrews on Once Upon a Time in Wonderland). Andrews was unavailable due to his previous commitment to Netflix 's Sense8. It was also announced that Galavant 's Karen David was cast as Princess Jasmine. She made her debut in the fourth episode of the season.
In early July, it was announced that Raphael Sbarge would be reprising his role as Jiminy Cricket / Dr. Archie Hopper in the season premiere. The character 's last appearance was in the season 4 episode "Rocky Road. '' On July 20, it was announced that Jessy Schram would be reprising her role as Cinderella in the third episode of the season. The episode explored a connection that her character has to someone from the Land of Untold Stories, as well as the start of her friendship with Snow White. That same episode also introduced Cinderella 's stepmother and stepsisters. David Anders returned this season as Victor Frankenstein. The character has a connection with Jekyll and Hyde. On August 15, it was announced that Jonny Coyne would be reprising his role as Dr. Lydgate from Once Upon a Time in Wonderland, in the fourth episode of the season. On August 26, it was announced that Faran Tahir was cast as Captain Nemo, and would have ties to Captain Hook. On September 22, it was announced that Tzi Ma would be reprising his role as the Dragon sometime in the season. Gabrielle Rose would also be returning as David 's mother, Ruth, who appeared via flashback in episode 7. On September 27, it was announced that Sean Maguire would be returning as Robin Hood. The former main character, who died on - screen near the end of season 5, will not be brought back from the dead but would appear in a multi-episode arc in a different capacity.
For the second half of the season, Wil Traval would be returning for multiple episodes, starting in episode 11, as the Sheriff of Nottingham. On October 31, it was announced that Mckenna Grace would be returning as a younger version of Emma. On December 11, Robert Carlyle revealed that Brandon Spink would be portraying a young Baelfire in episode 13. On January 6, 2017, it was announced that JoAnna Garcia would be returning as Ariel. The story line involves a team - up between Ariel, Jasmine, and Hook. On January 7, Horowitz confirmed that Gil McKinney would also be returning as Prince Eric. Two days later, it was announced that Rose McIver would also be coming back as Tinker Bell. On January 20, it was revealed that Sara Tomko was cast as Tiger Lily. The recurring character appears in at least two episodes starting in episode 17. On January 23, Horowitz announced that Patrick Fischler would return as former author Isaac Heller at some point in the second half of the season. On March 14, it was announced that episode 18 would introduce the Tin Man, played by Alex Désert, and the Cowardly Lion from the Wizard of Oz tale.
On February 16, TVLine released casting call descriptions for two characters who appear in the season 6 finale, with potential to continue into season 7 if the show is to be renewed. One is a man in his late 20s - early 30s who "was once optimistic and hopeful but now is a friendless, cynical recluse '' but "still possesses a dormant, deep - seated spark of hope that waits for the right person to reignite it. '' The other is a 10 - year - old girl who "comes from a broken home '' but "those struggles have only made her stronger -- something which will come in handy when darkness threatens everything she holds dear. '' On March 8, it was announced that Andrew J. West had been cast in the unidentified male role, later revealed in "The Final Battle '' as an adult Henry Mills. On March 9, it was announced that Alison Fernandez had been cast in the unidentified female role, also revealed to be Henry 's daughter, Lucy.
On May 8, 2017, Jennifer Morrison announced that she had declined an offer to remain on the show through season 7 and would not be returning to Once Upon a Time as a series regular in the event the series was renewed by ABC. However, Morrison noted that she had signed a contract to appear in one episode in season 7.
On May 12, 2017, showrunners Horowitz and Kitsis confirmed that five more cast members, in addition to Morrison, would not be returning to the show for a seventh season: Ginnifer Goodwin, Josh Dallas, Jared Gilmore, Emilie de Ravin, and Rebecca Mader. In their personal goodbyes to fans, both de Ravin and Mader cited the show 's decision to move forward in a new creative direction as the reason for their departures. Gilmore expressed similar sentiments. Meanwhile, Goodwin and Dallas had informed the showrunners a year prior that they intended to leave the show at the end of the sixth season.
|
when was the best animated feature film oscar introduced | Academy award for Best animated Feature - wikipedia
The Academy Awards are given each year by the Academy of Motion Picture Arts and Sciences (AMPAS or the Academy) for the best films and achievements of the previous year. The Academy Award for Best Animated Feature is given each year for animated films. An animated feature is defined by the Academy as a film with a running time of more than 40 minutes in which characters ' performances are created using a frame - by - frame technique, a significant number of the major characters are animated, and animation figures in no less than 75 percent of the running time. The Academy Award for Best Animated Feature was first awarded in 2002 for films made in 2001.
The entire AMPAS membership has been eligible to choose the winner since the award's inception. If sixteen or more films are submitted for the category, the winner is voted from a shortlist of five films (which has happened nine times); otherwise, the shortlist contains only three films. Additionally, at least eight eligible animated features must have been theatrically released in Los Angeles County within the calendar year for the category to be activated at all.
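As a compact restatement of the rules described above (this is not official Academy wording, and the function name and zero-return convention are illustrative only):

```python
def animated_feature_nominees(films_submitted, eligible_la_releases):
    """Number of nominee slots under the rules described above.

    Returns 0 when the category is not activated (fewer than eight
    eligible features theatrically released in Los Angeles County
    within the calendar year)."""
    if eligible_la_releases < 8:
        return 0  # category not activated this year
    if films_submitted >= 16:
        return 5  # large field: shortlist of five films
    return 3      # small field: shortlist of three films

print(animated_feature_nominees(20, 12))  # → 5
print(animated_feature_nominees(10, 9))   # → 3
print(animated_feature_nominees(10, 5))   # → 0 (category inactive)
```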
Animated films can also be nominated for other categories, but have rarely been so; Beauty and the Beast (1991) was the first animated film ever nominated for Best Picture. Up (2009) and Toy Story 3 (2010) also received Best Picture nominations after the Academy expanded the number of nominees from five to ten.
Waltz with Bashir (2008) is the only animated film ever nominated for Best Foreign Language Film (though it did not receive a nomination for Best Animated Feature). The Nightmare Before Christmas (1993) and Kubo and the Two Strings (2016) are the only two animated films to ever be nominated for Best Visual Effects.
For much of the Academy Awards ' history, AMPAS was resistant to the idea of a regular Oscar for animated features, considering there were simply too few produced to justify such consideration. Instead, the Academy occasionally bestowed special Oscars for exceptional productions, usually for Walt Disney Pictures, such as for Snow White and the Seven Dwarfs in 1938, and the Special Achievement Academy Award for the live action / animated hybrid Who Framed Roger Rabbit in 1989 and Toy Story in 1996. In fact, prior to the creation of the award, only one animated film was nominated for Best Picture: 1991 's Beauty and the Beast, also by Walt Disney Pictures.
By 2001, the rise of sustained competitors to Disney in the feature animated film market, such as DreamWorks Animation (founded by former Disney executive Jeffrey Katzenberg), created an increase of film releases of significant annual number enough for AMPAS to reconsider. The Academy Award for Best Animated Feature was first given out at the 74th Academy Awards, held on March 24, 2002. The Academy included a rule that stated that the award would not be presented in a year in which fewer than eight eligible films opened in theaters.
People in the animation industry and fans expressed hope that the prestige from this award and the resulting boost to the box office would encourage the increased production of animated features. Some members and fans have criticized the award, however, saying it is only intended to prevent animated films from having a chance of winning Best Picture. DreamWorks had advertised heavily during the holiday 2001 season for Shrek, but was disappointed when the rumored Best Picture nomination did not materialize, though it was nominated for and ended up winning the inaugural Best Animated Feature award.
The criticism of Best Animated Feature was particularly prominent at the 81st Academy Awards, in which WALL-E won the award but was not nominated for Best Picture, despite receiving widespread acclaim from critics and audiences and being generally considered one of the best films of 2008. This led to controversy over whether the film was deliberately snubbed by the Academy. Film critic Peter Travers commented that "If there was ever a time where an animated feature deserved to be nominated for Best Picture, it's WALL-E." However, official Academy Award regulations state that any movie nominated for this category can still be nominated for Best Picture. There have been complaints that the Best Animated Feature award is held in unfairly low regard by Academy members, with many refusing to vote for films they consider mere children's fare beneath them, or letting their own children see the films and going with their opinions instead. The dominance of Disney and Pixar, allegedly a result of this bias, is suggested to be injuring the credibility of the award.
In 2009, when the nominee slots for Best Picture were doubled to ten, Up was nominated for both Best Animated Feature and Best Picture at the 82nd Academy Awards, the first film to do so since the creation of the Animated Feature category. This feat was repeated the following year by Toy Story 3. From 2010 onward, as the Animated Feature category grew increasingly competitive, Pixar, a perennial nominee, failed to receive nominations for several of its films that drew more mixed critical reactions and box office earnings, while Pixar's sister studio Disney Animation won its first three awards.
In 2010, the Academy enacted a new rule regarding the motion capture technique employed in films such as Robert Zemeckis ' A Christmas Carol and Steven Spielberg 's The Adventures of Tintin, and how they might not be eligible in this category in the future. This rule was possibly made to prevent nominations of live - action films that rely heavily on motion capture, such as James Cameron 's Avatar.
When the category was first instated, the nomination went to the person (s) most involved in creating the film. This could be the producer, the director, or both. For the 76th Academy Awards in 2003, only the director (s) of the film received the nomination. For the 86th Academy Awards ten years later, this was amended to include one producer and up to two directors.
The Academy Awards have also nominated a number of non-English language films.
All films produced by Studio Ghibli. All the Japanese films on this list have also been released with English language dubbing.
|
calvin harris feat rihanna this is what you came for | This Is What You Came for - wikipedia
"This Is What You Came For '' is a song by Scottish DJ and record producer Calvin Harris, featuring Barbadian singer Rihanna. The song was released on 29 April 2016, through Columbia Records and Westbury Road. Featuring influences of house music, the song was written and produced by Harris and co-written by Taylor Swift who was initially credited under the pseudonym Nils Sjöberg. Rihanna and Harris had previously collaborated on her sixth studio album, Talk That Talk, which included the international chart - topper "We Found Love '' and US top five single "Where Have You Been '', the former of which was written and produced by Harris. He played the final version for Rihanna at the 2016 Coachella Music Festival.
The single debuted at number two on the UK Singles Chart. It peaked at number three on the US Billboard Hot 100, becoming Rihanna's 21st top-five song and Harris's second; the song is currently Harris's highest-peaking single as a lead artist. It also reached number one on the US Hot Dance/Electronic Songs chart, became the 12th number one for Rihanna and Harris's tenth on the sister chart Dance/Mix Show Airplay, and became Rihanna's 25th and Harris's fourth chart-topper on Dance Club Songs. It topped the charts in Australia, Canada, and the Republic of Ireland and peaked within the top ten in Germany, New Zealand and Switzerland.
"This Is What You Came For '' received mixed reviews from critics; while some praised them for creating a catchy track, others labeled it as boring, and cited Rihanna 's vocals as overprocessed. A music video for the song premiered on 17 June 2016 and features Rihanna in an open cube with images projected on the inside walls.
Calvin Harris presented the final recording to Rihanna, two weeks prior to its release. During his performance at Coachella, Harris shared the song with Rihanna and her manager in her trailer. Harris stated that he was "nervous '' to play her the song because he had "changed so many bits from when she first heard it. '' The song was released on 29 April 2016. The song marks the pair 's third collaboration, following the global chart - topper "We Found Love '' and the Billboard Hot 100 top - five single "Where Have You Been '', which are both featured on Rihanna 's sixth studio album, Talk That Talk (2011).
Harris and Nils Sjöberg were credited as the song's writers. On 13 July 2016, TMZ reported that the track was co-written by Harris's then-girlfriend Taylor Swift, who used the pseudonym Nils Sjöberg because they did not want their relationship to overshadow the song. The track became a point of contention upon its release, when Harris, in response to being asked about the possibility of collaborating with Swift during an interview with Ryan Seacrest, said that he "can't see it happening". Harris also took to his Twitter account to confirm that Swift wrote the lyrics and contributed some vocals, while he "wrote the music, produced the song, arranged it and cut the vocals", and also confirmed her previous request for secrecy as co-songwriter. The credit has since been officially changed to "Taylor Swift" in BMI's and ASCAP's entries for the song.
"This Is What You Came For '' is an EDM song. Gil Kaufman of Billboard stated that the song is "a chilled - out, joyful club track that nods to classic Chicago house from the late 1980s and early 1990s, but with a modern, poppier flavor. '' The song is written in the key of A minor with a tempo of 124 beats per minute. The song follows a chord progression of Am -- Fmaj7 -- G -- C, and Rihanna 's vocals span from G to E.
The official music video for the song, directed by Emil Nava with director of photography Martin Coppen and edited by Ellie Johnson, was released to YouTube on 16 June 2016. In it, a giant white box is shown sitting in a variety of places, such as a misty field and a forest. The scene then cuts to Rihanna, dressed in a sparkly blue jumpsuit, singing while standing and dancing inside the box. While she performs, graphics are projected onto all five sides around her; lasers, provided by Dynamic FX, were the only other effects used alongside the video projections. The projections include a variety of video effects and designs, footage of a partying crowd and of running horses, and a drawing of a mountain with lightning over it. The projection technology, used for the first time on this video, blended all angles of the backing video so it would display without skewing and could play back in a 3D environment. Many scenes and different cube designs were cut, as the filming was cut short due to time constraints. Calvin Harris makes a brief cameo appearance, driving a sports car (a Lamborghini Aventador). As the video ends, Rihanna walks outside, revealing the box has been set up on a dark, deserted soundstage. Two months after its release, the video reached 500 million views, and on 29 November 2016 it reached one billion views. As of January 2018, it has reached 1.9 billion views and is the site's 22nd most viewed video.
"This Is What You Came For" received mixed reviews. Larry Bartleet of NME stated: "As it builds to a climax, he pulls it right back, eschewing the EDM drop in favour of voguish and mellow tropical house. It will definitely be a hit, however by manipulating her voice so much he's stripped away the personality that made 'We Found Love' such a standout, and her latest album 'Anti' so compelling. The result is surprisingly soulless." Robbie Daw of Idolator gave the song a positive review, stating, "The song is pretty decent Euro-centric dance floor fodder. It's just a bit subdued - which might have been the correct decision; why attempt to top a pop classic and fail when you can zig left, aiming for other territory altogether?"
Ryan Middleton of Music Times wrote: "The song starts out with Rihanna's vocals and mostly cooing with a soft build that 'drops' into a steady house, four-on-the-floor rhythm and deep bass stabs, using Rihanna's voice as the main element for the melody. It doesn't have the soaring, pop radio synths one might expect from these two, but then again this isn't 2011 and a simple revamp of 'We Found Love' would be disappointing. The drop may be a little disappointing for fans upon first listen, but give it a second, third or fifth spin and there is an earworm quality to this one that will stick with you for a while. It may not have the lasting power of 'We Found Love,' but it will dominate the radio all summer." He praised the song as a catchy hit but criticized Harris for failing to create anything new. NPR listed it at 89 on its "Top 100 Songs of 2016" list, while Fuse considered it the 10th best song of the year.
"This Is What You Came For '' debuted at number two on the UK Singles Chart. The single debuted at number nine on the US Billboard Hot 100 and later peaked at number three on the chart, becoming Rihanna 's 28th top ten hit in the US, Harris 's fourth, while on the magazine 's Dance / Mix Show Airplay it extended Rihanna 's number - one streak on that chart to 12 (the most of any artist on that chart) and gave Harris his tenth (the most among male artists, but at the same time extended his most weeks at number one on the chart among artists to 73). "This Is What You Came For '' became Rihanna 's 25th and Harris 's fourth number one on the Hot Dance Club Songs Chart, where it became the first song since 2013 's "Wake Me Up '' to stay at number one for two weeks. As of October 2016, the single has sold 1.2 million copies in the United States.
"This Is What You Came For '' debuted at number one on the Scottish Singles Chart, and peaked at number one in Australia, Canada and Ireland. The song also reached the top 10 in Germany, Ireland, New Zealand, Switzerland, Belgium, Denmark, Finland, France, Hungary, Italy, the Netherlands, Norway, Portugal and Slovakia. The song has reached the top 10 in all except three of the countries it has charted in.
Taylor Swift performed "This is What You Came For '' on the piano for the first time live at the United States Grand Prix in Austin, Texas, on October 22, 2016. Swift then performed an acoustic guitar version at the DirecTV Super Saturday Night in Houston, Texas, on February 4, 2017.
|
real name of hanuman in karmphal data shani | Shani (TV series) - Wikipedia
Karmaphal Daata Shani (English: The Divine Judge of Deeds Shani) is an Indian Hindu mythological television series which premiered on 7 November 2016 and is broadcast on Colors TV. The series is produced by Siddharth Kumar Tewary's Swastik Productions and airs every Monday to Friday at 9:00 pm.
The series has been dubbed into Telugu on Gemini TV, airing every Monday to Saturday at 8:30 pm from 24 July 2017. It has also been dubbed into Malayalam on Surya TV, airing every Monday to Saturday at 8:30 pm from 24 July 2017. The series has been remade in Kannada with different actors under the title "Shani" and is available on Voot for regional audiences.
The story of the series is based on the life of the god Shani, who is known for his wrath. The serial also depicts the deities Vishnu and Shiva as Shani's mentors, and shows how difficult Shani's childhood was.
When there is a battle between the gods and the demons, Mahadev appears and reveals that soon the god of karmaphal is going to take birth.
Meanwhile, Sanghya, the wife of Surya, cannot bear Surya's heat. She visits her father Vishwakarma and steals a potion which brings her shadow (Chhaya) to life.
Chhaya gives birth to a boy, but Surya, the Sun God, does not accept the boy as his son because of his dark complexion. He tells Chhaya that the boy must stay out of his sunrays lest he be burnt to ashes, so Chhaya takes the boy to a forest, where she names him Shani. Later, Mahadev reveals that Shani is the Karmaphal Daata.
The show then covers the different incidents of Lord Shani's life, unfolding each chapter from the Shrapit yoga to Karmaphal Daata Shani and then to Dandnayak Shani, and focuses on removing misunderstandings about Lord Shani. The last story of his childhood concerned his sister Bhadra.
After Shani's mother Chhaya vanished, a sad and heartbroken Shani meets Rahu, who starts controlling his mind. In anger over his mother's death, Shani begins fighting the Devas. Mahadev sends Nandi to bring Shani to him, but Shani defeats him in battle. Shiva, angered, sends Veerbhadra to bring Shani back dead or alive, but Shani defeats him too. A confrontation then takes place between Shiva and Shani. Shiva removes all the negativity and Rahu's influence from Shani's mind, tells him that he is born to be the Lord of Deeds, and gives him the Vakra Drishti (the eyesight of justice).
Mahadev tells Shani the aim of his existence. Shani becomes known among the gods for his impartial and strict justice; as the taskmaster, he ensures everyone receives the fruit of their deeds. His first justice is served when he punishes his own father, Surya Dev. Devi Sanghya wants Shani dead, but none of her plans works against the taskmaster. Shani, now bound to no one, becomes so impartial that even the Trinity is satisfied. The series then shows his judgments on Harishchandra, Ganesha, Hanuman, Chandra Dev, Indra Dev, Rahu, Devi Sanghya, Dumbnaad, Maali, Sumaali and many more.
On Vishnu 's command, Shani sacrificed his friendship with Hanuman so he could choose his own path. Hanuman got cursed by Matang Rishi to lose his memory, and he and Shani parted ways.
Coming back to Bhadra: she realised that she posed a threat to the universe and killed herself in such a way that Chhaya believes Shani murdered her. Shani kills Sangya for her misdeeds but also ends up hurting Suryadev. The Tridev turn Suryadev into a ball of energy. Chhaya strips Shani of the position of Karmaphal Daata and banishes him from Suryalok.
Shani renounces everything and tells the Tridev that he no longer cares about the world. Elsewhere, Chhaya sends Yam and Yami off with Devguru Brihaspati to complete their higher studies, and Mahadev goes into Samadhi. The story then takes a leap of 10 years.
A grown-up Shani is in deep meditation. Yam and Yami return to Suryalok, and Chhaya announces that they can perform the yagya for Suryadev's revival. Indradev and Rahu plot against the Surya family. Shani wakes from his meditation.
Chhaya refuses to talk about Shani, while Yami refuses to believe the rumours about him. Everyone in Suryalok gathers for the Uttarayan yagya. Indra instigates everyone against Shani through his maya. Kakol protests but is attacked by the soldiers.
Shani comes to Kakol's rescue and stuns everyone. He gets emotional when Yami asks his identity. As Shani gets into a fight with Yam, Chhaya demands to know who he is, and Shani reveals himself.
Chhaya, though pretending to be hard-hearted, shows signs of missing Shani and his childhood.
Shani battles with all the planets to win Suryadev 's throne in order to save it from the clutches of Indradev. Shani defeats all the Grahas and impresses Mahakaali with his logical reasonings.
Mahakaali announces Shani as "Lagnesh" and offers to give him back his childhood with all its happiness, but Shani instead chooses to revive Suryadev for the sake of the universe. After Suryadev's revival, no one acknowledges Shani's sacrifice; on waking from his sleep, Suryadev punishes Shani, remembering the massive destruction Shani caused in past years.
Gandharva Raj Chitrarath comes to Suryalok with his daughter Dhamini, intending to have her married to Yam. Dhamini gets into an argument with Shani after he condemns her father for insulting Chhaya.
Chhaya requests Dhamini to stay in Suryalok till the time of Yami 's marriage with Mangala. Dhamini accepts and Chitrarath also starts planning for her to get married to Yam. However, Dhamini is seen in a dilemma and Chhaya tries to sympathise with her.
Shani tells Dhamini that her presence doesn't affect him in any way and that he doesn't care whether she stays or not. He stuns her by saying that he knows the real reason behind her stay and advises her to choose the right path instead of the opposite one.
Eventually Dhamini tells Chhaya about her life: how her mother was killed by the Dhamin Rakshas, which is how she came to be named "Dhamini". Chhaya consoles Dhamini and accepts her as her new daughter.
Eventually Dhamini falls in love with Shani. She tries to propose to him, but each time some difficulty or other stops her.
Shani then organizes a grand Swayamvar for Dhamini in Suryaloka. Soon after one more rescue, Dhamini, on Yami's encouragement, proposes marriage to Shani, but he politely refuses, stating that he has renounced all his relations and is not interested in relationships. Disheartened, Dhamini then unwillingly chooses Yamraj as her husband in the Swayamvar, to fulfil her father Gandharvraj Chitrarath's dream of making her the daughter-in-law of the Surya family.
Everyone accepts her decision and she gets engaged to Yamraj. Yami, on the other hand, is furious at Shani for refusing Dhamini's proposal. Lord Vishnu appears and enlightens Shani that Dhamini will be his power, helping him fulfil his duties without being swayed from his path.
Indradev plans to insult Dhamini before her wedding to Yamraj for not accepting his offer to become an Apsara of Indralok. Shani, who had promised Chhaya that he would strive to protect Dhamini's honour at any cost, foils all of Indradev's plans.
Meanwhile, Vishwakarma summons Suryadev, Chhaya and Chitrarath and informs them that Dhamini must undergo many hardships before her wedding and that her wedding to Yamraj cannot take place, shocking them all.
Dhamini's inner self longs for Shani, but she pretends to the outside world that she is happy with her wedding to Yam. Indradev tries to disgrace Dhamini by making her dance in front of his mates, but Shani opposes this. Dhamini accepts Indradev's idea in order to tease Shani. Shani gets angry at Dhamini and says he hates people who pretend.
The Tridev state that "the longer the duo move away from each other, the closer they get to each other".
|
which subunits of the recbcd trimer show helicase structure and function | RecBCD - wikipedia
RecBCD (EC 3.1.11.5, Exonuclease V, Escherichia coli exonuclease V, E. coli exonuclease V, gene recBC endoenzyme, RecBC deoxyribonuclease, gene recBC DNase, gene recBCD enzymes) is an enzyme of the E. coli bacterium that initiates recombinational repair from potentially lethal double-strand breaks in DNA, which may result from ionizing radiation, replication errors, endonucleases, oxidative damage, and a host of other factors. The RecBCD enzyme is both a helicase that unwinds, or separates, the strands of DNA, and a nuclease that makes single-stranded nicks in DNA.
The enzyme complex is composed of three different subunits called RecB, RecC, and RecD, and hence the complex is named RecBCD (Figure 1). Before the discovery of the recD gene, the enzyme was known as "RecBC." Each subunit is encoded by a separate gene (recB, recC, and recD, respectively).
Both the RecD and RecB subunits are helicases, i.e., energy-dependent molecular motors that unwind DNA (or RNA in the case of other proteins). The RecB subunit in addition has a nuclease function. Finally, RecBCD enzyme (perhaps the RecC subunit) recognizes a specific sequence in DNA, 5'-GCTGGTGG-3', known as Chi (sometimes designated with the Greek letter χ).
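Since Chi recognition is simply a matter of a fixed eight-base motif appearing on the 3'-ended strand, it can be illustrated with a short sketch. The following Python snippet is not part of the source; it is a hypothetical illustration that scans a DNA string for occurrences of the Chi sequence:

```python
# Illustrative sketch (not from the source): locate Chi sites
# (5'-GCTGGTGG-3') on one strand of a DNA sequence, the motif
# RecBCD is described as recognizing during unwinding.

CHI = "GCTGGTGG"  # the Chi sequence, read 5' to 3'

def find_chi_sites(strand: str) -> list[int]:
    """Return the 0-based start position of every Chi occurrence."""
    strand = strand.upper()
    positions = []
    start = strand.find(CHI)
    while start != -1:
        positions.append(start)
        start = strand.find(CHI, start + 1)  # continue past this hit
    return positions

# Example: one Chi site embedded in a short synthetic sequence
print(find_chi_sites("ATTCGCTGGTGGAACCT"))  # [4]
```

Note that this only searches the given strand; because Chi is recognized on the 3'-ended strand in its 5'→3' orientation, a real analysis would scan each strand of the duplex separately.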
RecBCD is unusual amongst helicases because it has two helicases that travel with different rates and because it can recognize and be altered by the Chi DNA sequence. RecBCD avidly binds an end of linear double - stranded (ds) DNA. The RecD helicase travels on the strand with a 5 ' end at which the enzyme initiates unwinding, and RecB on the strand with a 3 ' end. RecB is slower than RecD, so that a single - stranded (ss) DNA loop accumulates ahead of RecB (Figure 2). This produces DNA structures with two ss tails (a shorter 3 ' ended tail and a longer 5 ' ended tail) and one ss loop (on the 3 ' ended strand) observed by electron microscopy. The ss tails can anneal to produce a second ss loop complementary to the first one; such twin - loop structures were initially referred to as "rabbit ears. ''
During unwinding the nuclease in RecB can act in different ways depending on the reaction conditions, notably the ratio of the concentrations of Mg ions and ATP. (1) If ATP is in excess, the enzyme simply nicks the strand with Chi (the strand with the initial 3 ' end) (Figure 2). Unwinding continues and produces a 3 ' ss tail with Chi near its terminus. This tail can be bound by RecA protein, which promotes strand exchange with an intact homologous DNA duplex. When RecBCD reaches the end of the DNA, all three subunits disassemble and the enzyme remains inactive for an hour or more; a RecBCD molecule that acted at Chi does not attack another DNA molecule. (2) If Mg ions are in excess, RecBCD cleaves both DNA strands endonucleolytically, although the 5 ' tail is cleaved less often (Figure 3). When RecBCD encounters a Chi site on the 3 ' ended strand, unwinding pauses and digestion of the 3 ' tail is reduced. When RecBCD resumes unwinding, it now cleaves the opposite strand (i.e., the 5 ' tail) and loads RecA protein onto the 3 ' - ended strand. After completing reaction on one DNA molecule, the enzyme quickly attacks a second DNA, on which the same reactions occur as on the first DNA.
Although neither reaction has been verified by analysis of intracellular DNA, due to their transient nature, genetic evidence indicates that the first reaction more nearly mimics that in cells. For example, RecBCD mutants lacking detectable exonuclease activity retain high Chi hotspot activity in cells and nicking at Chi outside cells. A Chi site on one DNA molecule in cells reduces or eliminates Chi activity on another DNA, perhaps reflecting the Chi - dependent disassembly of RecBCD observed in vitro under conditions of excess ATP and nicking of DNA at Chi.
Under both reaction conditions, the 3 ' strand remains intact downstream of Chi. The RecA protein is then actively loaded onto the 3 ' tail by RecBCD. At some undetermined point RecBCD dissociates from the DNA, although RecBCD can unwind at least 60 kb of DNA without falling off. RecA initiates exchange of the DNA strand to which it is bound with the identical, or nearly identical, strand in an intact DNA duplex; this strand exchange generates a joint DNA molecule, such as a D - loop (Figure 2). The joint DNA molecule is thought to be resolved either by replication primed by the invading 3 ' ended strand containing Chi or by cleavage of the D - loop and formation of a Holliday junction. The Holliday junction can be resolved into linear DNA by the RuvABC complex or dissociated by the RecG protein. Each of these events can generate intact DNA with new combinations of genetic markers by which the parental DNAs may differ. This process, homologous recombination, completes the repair of the double - stranded DNA break.
RecBCD is a model enzyme for the use of single molecule fluorescence as an experimental technique used to better understand the function of protein - DNA interactions. The enzyme is also useful in removing linear DNA, either single - or double - stranded, from preparations of circular double - stranded DNA, since it requires a DNA end for activity.
|
who played the father in papa don't preach | Alex McArthur - Wikipedia
Alex McArthur (born March 6, 1957) is an American actor.
He was born in Telford, Pennsylvania, the son of Bruce, a contractor, and Dolores McArthur. He studied acting at De Anza College and San Jose State University, and worked as a bartender at the legendary nightclub Studio 54 in New York.
He is well known for playing the serial killer Charles Reece in the film Rampage and the honest cowboy Duell McCall in the Desperado series of western TV films, whose original screenplay was written by Elmore Leonard. He was nominated for a Gemini Award for Best Performance by an Actor in a Supporting Role for the television film Woman on Trial: The Lawrencia Bembenek Story.
He also appeared in the music video for Madonna 's song "Papa Do n't Preach '', a segment of The Immaculate Collection video compilation, as Madonna 's boyfriend and the father of her child in the video.
|
the block season 1 contestants where are they now | The Block (Australian TV series) - Wikipedia
The Block is an Australian reality television series broadcast on the Nine Network. The series follows four or five couples as they compete against each other to renovate and style houses / apartments and sell them at auction for the highest price.
The original series first ran for two consecutive seasons in 2003 and 2004, and was originally hosted by Jamie Durie.
The Nine Network revived The Block after a six-year absence, with a third season that commenced airing on 22 September 2010, this time hosted by television personality and builder Scott Cam. Shelley Craft joined as co-host from the fourth season.
The Block has a large number of commercial sponsors and prominently features brand sponsorships regularly throughout episodes.
The original format of the series featured four couples with a prior relationship renovating a derelict apartment block in the Sydney suburb of Bondi, with each couple renovating a separate apartment over a period of 12 weeks on a budget of A$40,000. The apartments were then sold at auction, with each couple keeping any profit made above a set reserve price and the couple with the highest profit winning a A$100,000 prize. The current format is largely the same, except the series usually features five couples, is mainly based in Melbourne suburbs, and the budget is $100,000 or more.
The first season was filmed at Bondi and the second at Manly. The third season again took place in Sydney, in the suburb of Vaucluse. Moving to Victoria, Australia, the fourth season was filmed in the Melbourne suburb of Richmond, Victoria on Cameron Street. Breaking with tradition, season four was filmed in four side - by - side houses as opposed to an apartment block of four. Season four saw television personality Shelley Craft join Scott Cam in hosting the show.
Season five began airing in April 2012. As with season four, season five has retained the four separate houses format, as opposed to four apartments in a single apartment block as in earlier seasons. Season five is again set in Melbourne, on Dorcas Street, and is set in four adjacent multi-storey town houses. Season six returned to Sydney, in Bondi, for the tenth anniversary. The program has remained in Melbourne since season seven.
The first three seasons of The Block aired once weekly for 13, 26 and 9 weeks respectively. Since season four, the program has aired across multiple nights per week.
The first season of The Block began airing on 1 June 2003 on the Nine Network, replacing Backyard Blitz and Location Location in the network 's flagship time slot of Sunday at 6: 30 to 7: 30 pm (AEST). The season was presented by Backyard Blitz host Jamie Durie and filmed in Bondi, New South Wales, with the majority of filming being completed prior to the series airing for editing purposes.
Selected from approximately 2,000 applicants, the four couples in the season were:
The auction profits had a combined result of $443,000.
Following the success of the first season, an expanded second season of 26 episodes, airing twice weekly, premiered on 18 April 2004. The season was again set in Sydney, although in the suburb of Manly rather than Bondi where the first season was located.
The auction profits had a combined result of $155,000, with two properties not selling at auction.
Selected from over 18,000 applicants, the four couples in the season were:
Two original contestants, Dani and Monique Bacha, left the program in January 2004, two weeks into the second season, when it was reported that Dani had spent six months in jail in 2002 following his conviction for a drug - related offence. Andrew Rochford and Jamie Nicholson replaced Dani and Monique Bacha.
After a long break, the series was revived in 2010 with a set of four apartments in the upmarket suburb of Vaucluse in Sydney being renovated and Scott Cam replacing Jamie Durie as host.
The auction profits had a combined result of $339,500, with only one property not selling at auction.
Season four saw six major changes to the format of The Block.
Eight couples were initially selected, with four being eliminated and the other four being given keys to the houses. The four remaining teams are:
The winners of The Block were Polly and Waz. They made $15,000 in profit and, because the other three couples' houses were passed in, they also won the $100,000 grand prize. Also in this finale episode, Josh proposed to Jenna, his girlfriend of five years and partner on The Block.
This was the worst auction in the history of The Block, with auction profits having a combined result of $15,000 due to only one property selling at auction.
Polly and Waz were the only couple whose property sold at auction, with the other three failing to meet their reserve prices. Following the auction, Amie and Katrina 's property sold for their exact reserve amount, meaning they would not take any winnings from appearing on The Block. Also, Rod and Tania 's property sold for the highest profit on The Block, at $72,000 (however, as it was after the auction, Polly and Waz are still the winners).
This season, like season four, was also based in Melbourne, in the inner city suburb of South Melbourne with four double storey side by side terrace houses located at 401 -- 407 Dorcas Street. The properties are all on separate titles with car access from Montague Street and plans approved to allow for a third story extension.
Eight couples were initially selected, with four being eliminated and the other four being given keys to the houses. The four remaining teams are:
The auction profits had a combined result of $1,723,001.01.
It was announced during the finale of the fifth season that an all-star edition of the series would air in 2013, with viewers able to vote for couples to return from past seasons. These votes were taken into consideration when selecting the contestants, and the four returning couples were announced in October 2012 as Phil and Amity (of season one), Mark and Duncan (season three), Josh and Jenna (season four) and Dan and Dani (season five). Phil and Amity won All Stars, with their home selling for a total of $1,670,000. The auction profits had a combined result of $815,000.
Production for the series relocated from Melbourne -- which had hosted the prior two seasons -- to its original location of Bondi in Sydney to celebrate the tenth anniversary of the show 's first season. Filming took place over nine weeks from October to December 2012.
Darren Palmer, who was a guest judge in the fourth and fifth seasons, replaced John McGrath as a permanent judge for this season. Both Neale Whitaker and Shaynna Blaze reprised their roles as judges from the previous season.
The Nine Network renewed The Block for a seventh season to air after Easter in 2013. The location for this season was 142 Park Street, South Melbourne. The building consists of five levels, with each couple allocated a full level to renovate. Alisa and Lysandra renovated level 1, Matt and Kim were responsible for level 2, level 3 was occupied by Bec and George, level 4 was completed by Madi and Jarrod, and level 5 was made over by Trixie and Johnno.
Twin sisters Alisa and Lysandra won The Block, with a profit of $295,000. The auction profits had a combined result of $1,283,000.
Alisa & Lysandra win The Block with a $4000 profit lead over Madi & Jarrod.
Applications for the eighth season of the series opened whilst the seventh season was airing, with couples aged between 18 and 65 years old being sought by casting agents. Filming for the season was scheduled to occur between November 2013 and January 2014, and aired from 27 January 2014.
Season 8 was based in the Melbourne suburb of Albert Park, Victoria. The production company paid $5.9 million for 47 O'Grady Street, a brick warehouse that was then transformed into four luxury apartments.
The working title of season 8 was "Fans vs Favourites '' as shown in the 2014 preview that was aired on the Big Brother 2013 finale. Returning to The Block were Brad and Dale (season 5) / Alisa and Lysandra (season 7). Joining The Block were The Super K 's -- Kyal and Kara, and The Retro Rookies -- Steve and Chantelle. Steve O'Donnell and Chantelle Ford won The Block with a profit of $636,000 + $100,000 winners ' prize money. The auction profits had a combined result of $ 7006232650000000000 ♠ 2,326,500.
In May 2014, it was reported that Lukas Kamay -- who had won the auction for Alisa and Lysandra 's apartment -- had been arrested for his involvement in an insider trading scam. As a result, the $500,000 deposit he had paid was frozen and the apartment was seized. The apartment will be re-sold at a later date. It 's unclear whether Alisa and Lysandra will receive the money they would have received had the scandal not broken, or if they are now considered to have come fourth and did not sell their apartment at auction.
The ninth season of The Block featured contestants renovating a former office building in Prahran into luxury apartments, with the season subtitled as The Block: Glasshouse. Filming began in April 2014. One of the contestants was former professional Australian rules footballer Darren Jolly and his wife Deanne as one of the couples. The season debuted on 27 July 2014 at the 6: 30 time slot.
Shannon and Simon Voss won The Block with a profit of $335,000 plus $100,000 winners' prize money, while Michael & Carlene and Darren & Deanne made the bare minimum of $10,000 above reserve. Even newly-weds Karstan & Maxine only netted $40,000 in winnings. The auction profits had a combined result of $705,000.
The Block was renewed for a tenth season, which began airing on 27 January 2015.
The working title of season 10 was "Triple Threat '' as shown in the 2014 preview that was aired on 23 November 2014. It was premiered on 27 January 2015 where contestants renovate a former three - level block of flats. Darren & Deanne (season 9), Bec & George and Matt & Kim (both season 7) returned to vie for a spot as contestants. Former contestant Dan Reilly from seasons 5 and 6 returned, this time as an apprentice foreman ("foreboy '') under Keith 's guidance after Dan himself became a qualified builder, who was a qualified carpenter during his stints as a contestant.
Former contestants Darren & Deanne won the season with $835,000 plus $100,000 prize money, and every team won over $665,000. This season produced the highest combined auction profits to date, with a result of $3,065,000.
The Block was renewed for an eleventh season, which went into production in May 2015. Season 11 took the show "back to basics" after ratings declined during the tenth season: episodes were cut from 90 to 60 minutes, the episode count was reduced, eliminations were removed and Thursday night episodes were dropped.
Filming for season 11 began on 15 May 2015. This season renovated the former Hotel Saville in South Yarra - an octagonal, eight floor brick building. The title for Season 11 is "The Block: Blocktagon ''.
Co-creator Julian Cress said that this season of The Block would have no tradies and only passionate do - it - yourself couples. In other seasons, at least one person in each team had a trade. The change came in the new direction in the back - to - basics change to the season. Cress said viewers would relate more to the characters who are big on spirit but small on skills when the show returned later in the year.
This season was sponsored by Mitre 10 (building equipment), Swisse Australia (vitamins), Aldi (groceries), Domain (money and apartment information), The Good Guys (electronics and kitchens) and Suzuki (transport).
Shay & Dean Paine won the season with $655,000 plus $100,000 prize money, and all contestants won over $349,000. The auction profits had a combined result of $2,439,000.
On 28 October 2015, Nine renewed the series for a twelfth season. Since 2013, the Nine Network had aired two seasons of the show each year. In 2016, however, this would be the one and only season airing, and it did not air until the last quarter of 2016. It was once again set in Melbourne. On 27 December 2015, Frank Valentic teased a video saying that there are rumours of The Block going to Greville Street, Prahran.
On 18 February 2016, it was reported that The Block 's producers had bought an old heritage - listed soap factory for $5 million at 164 Ingles Street, Port Melbourne. This address was confirmed by Scott Cam on The Today Show on 9 May 2016. Filming began on 26 May 2016. The series began airing in August 2016 and had 5 teams competing. The season premiered on Sunday 21 August 2016. This season 's contestants, Julia & Sasha, are the first ever female same - sex couple to compete on any season of The Block. The season concluded on 13 November 2016.
This season is sponsored by Mitre 10 (building equipment), Aldi (groceries), Domain (cash flow and apartment information), McCafé (snacks, beverages, and the McCafe Quality Award), Stayz (challenge and additional finale prizes), Suzuki (transport and viewer voting prize) and Telstra (Smart Home Technology and additional cash flow).
William Bethune & Karlie Cicero won the season with $715,000 plus $100,000 prize money. Each team of contestants won $425,000 or more. The auction profits had a combined result of $2,835,000.
On 8 November 2016, The Block was renewed for a thirteenth season at Nine 's upfronts. Applications for the thirteenth season of the series opened on 9 January 2017, with energetic couples aged between 18 and 65 years old being sought by casting agents. Filming for the season is scheduled to occur between April 2017 and July 2017. The casting call also specifies first round couples will be reduced to final participants in the first week of filming, which suggests Season 13 will feature an elimination round similar to that of Season 5.
On 11 March 2017, it was reported that a vacant block of land at 46 Regent Street, Elsternwick, had been purchased for $9.6 million back in December 2016, with plans approved to build a five lot subdivision, meaning for the first time ever, they would be building a property from the ground up, instead of renovating an existing building. On 17 March 2017, it was officially confirmed that 46 Regent Street Elsternwick will be the location of The Block 's thirteenth season with filming to begin on 27 April 2017, however they will not be building a property from the ground up, as five old rundown weatherboard houses are being relocated to the location, meaning this will be the first time since the sixth season the contestants will renovate a house.
In June 2017, Network Nine officially announced the location and teams for the thirteenth season of The Block, one team being Aussie model Elyse Knowles and her boyfriend Josh. The season premiered on Sunday 30 July 2017.
This season is sponsored by Mitre 10 (building equipment), Domain (cash flow & house marketing), McCafé (snacks & beverages), Stayz (challenge & additional finale prizes), Volkswagen (transport & cash flow), Centrum (vitamins) & Youfoodz (groceries & meals).
Elyse Knowles & Josh Barker won the season with $447,000 + $100,000 prize money. Each team of contestants won $95,000 or more. The auction profits had a combined result of $1,220,000.
On 21 November 2016, it was reported the producers were looking at The Gatwick Private Hotel at 34 Fitzroy Street, St Kilda as a possible site for its thirteenth season. On 17 December 2016, it was reported that The Block producers and the owners of The Gatwick Hotel are currently in negotiations on purchasing the property. On 4 March 2017, it was reported that the Gatwick Hotel deal did not go ahead and that the producers are searching for another property, however on 6 March 2017, it was reported the producers were in a closing deal to buy the property as residents of the building are to move out. On 17 March 2017, it was reported that The Gatwick Hotel would not be the location of The Block 's thirteenth season, although there was still interest in the building. On 22 March 2017, Nine officially confirmed that the building had been purchased and will be used for a future season of The Block.
In June 2017, The Block producers lodged renovation plans for The Gatwick Hotel with Port Phillip City Council. On 23 July 2017, the renovation plans were officially released. They revealed that the network paid $10 million for the building (not $15 million as originally reported) and confirmed that the site will have eight apartments: two - bedroom homes on the ground level and three - bedroom homes on the remaining levels. The existing roof is to be partly demolished and a fourth floor built to include pergolas above outdoor terraces; a car park area would have six car spaces and bicycle parking, and each apartment would come with secure storage.
Applications for the fourteenth season of the series opened in August 2017 and closed on 10 September 2017, with couples aged between 18 and 65 years old being sought by casting agents. Filming for the season was originally slated to occur between January 2018 and April 2018, but instead began on 8 February 2018. In October 2017, the fourteenth season and location of The Block were officially confirmed at Nine 's upfronts. In June 2018, Network Nine announced the teams for the fourteenth season of The Block. In July 2018, Network Nine announced the premiere date of Season 14 as Sunday, 5 August at 7pm.
In June 2018, it was reported that The Block producers had acquired a rundown backpackers ' hostel, the Oslo Hotel, at 38 Grey Street, St Kilda, through an off - market deal struck after the series approached one of the owners. The building contains five mansions hidden behind the facade. The series producers and building planners were set to submit renovation plans to the City of Port Phillip council imminently.
The two first seasons were successful in the ratings, with the first season averaging 2.2 million viewers. Season 1 Finale was watched by 3.115 million viewers and Season 2 was watched by 2.273 million viewers.
The third season debuted with 1,134,000, a daily rank of 9. It lost to all its main time slot competition consisting of Glee on Network Ten and Border Security on the Seven Network. However, it remained successful with key demographics and enjoyed steady ratings throughout the season. Season 3 concluded with 1.712 million viewers, and was the top program of the night in total people and all key demographics. It was also the second most - watched program of the week.
Reno Rumble was a program that pit teams from The Block against teams from the Seven Network 's reality series House Rules. The program aired on the Nine Network and was produced by the same production company as The Block.
The series was renewed for a second season but did not involve former contestants from The Block or House Rules and was only produced by the Nine Network.
The Block has also been adapted in Russia, Norway, Romania, Sweden, Denmark, Finland, Iceland, United States and South Africa.
|
who won golden bat award in 2017 icc champions trophy | 2017 ICC Champions Trophy final - wikipedia
The final of the 2017 ICC Champions Trophy was played on 18 June 2017 between Pakistan and India at The Oval in London, to determine the winner of the eighth edition of the ICC Champions Trophy. Pakistan qualified for the final by defeating the hosts England convincingly by 8 wickets in the first semi-final at Cardiff in Wales on 14 June, and reached their maiden Champions Trophy final. India, the defending champions and favourites, came into the final by defeating Bangladesh comfortably by 9 wickets in the second semi-final at Birmingham on 15 June, to reach their fourth Champions Trophy final, a record.
In an unexpected performance, Pakistan beat India comfortably to win their maiden ICC Champions Trophy, outclassing them across all departments to win by 180 runs, which was the largest margin of victory in the final of an ICC ODI tournament. Pakistan, who were massive underdogs entering as the lowest - ranked team in the competition, became the seventh nation to win the Champions Trophy, and it was their first ICC ODI tournament title since 1992. Fakhar Zaman of Pakistan received the Man of the Match award for scoring a sublime 114. Shikhar Dhawan of India received the Golden Bat award for scoring 338 runs in the tournament while Hasan Ali of Pakistan received the Golden Ball award for taking 13 wickets; he was also adjudged the Man of the Series for his outstanding contribution towards Pakistan 's first ICC tournament title since 2009.
The traditional rivalry between both sides set the stage for a high - voltage clash. The match is estimated to have been watched by 400 million viewers, becoming the third most - watched game in cricketing history.
Pakistan and India share a historical rivalry in cricket. Prior to this match, the two sides had played 128 times against each other in ODIs, where Pakistan won 72 matches, India won 52 matches and four matches ended with no result. While Pakistan have had the upper hand bilaterally, India enjoyed an edge in global ICC tournaments where they won 13 times against Pakistan, and Pakistan won twice against India. The two sides met only twice before in the finals of global tournaments: the non-ICC World Championship of Cricket Final in 1985, and the 2007 ICC World Twenty20 Final.
Prior to this match, the teams had met four times in the Champions Trophy and had two victories each. Pakistan 's last win was in 2009; since then, India won seven games against Pakistan across ICC tournaments consecutively. Their most recent clash was on 4 June 2017, during the group stages of the ongoing Champions Trophy where India won by 124 runs (D / L method). Much of the pre-match analysis envisioned a strong contest between India 's batting lineup and Pakistan 's bowling side, both of which were considered the strengths of their respective teams and remained formidable in this tournament.
Ranked eighth in the ICC ODI Championship at the start of the tournament, Pakistan started poorly, before improving progressively in each game. They lost to India in the first game by 124 runs in a sloppy display, but then defeated top - ranked South Africa by 19 runs under the Duckworth -- Lewis method in their next game. They gained momentum and beat Sri Lanka by 3 wickets in their final group game, a thrilling must - win encounter, and qualified for the semi-finals placed second in Group B, behind India on net run rate. In the semi-final, England, with their undefeated run and home advantage, were tipped as firm favourites. However, they were outplayed by Pakistan with both bat and ball, the latter winning comprehensively by 8 wickets with almost 13 overs to spare. This paved the road to Pakistan 's first qualification for a Champions Trophy final.
India came into the tournament as defending champions and favourites along with England, and were ranked third in the ICC ODI Championship. They beat arch - rivals Pakistan convincingly in their first group face - off, winning by 124 runs. They lost their second match to Sri Lanka by 7 wickets, despite posting a total of 321, in what turned out to be the highest successful run - chase in Champions Trophy history. India won their final group game, a must - win encounter against South Africa, comfortably by 8 wickets. They finished on top of Group B with two wins and a net run rate ahead of Pakistan. In the semi-final, India faced Bangladesh, and put in yet another dominating display, winning comfortably by 9 wickets and sealing a final with Pakistan.
Marais Erasmus of South Africa and Richard Kettleborough of England were named as the on - field umpires for the final. They had both previously officiated in the semi-final matches of the tournament; Erasmus, in the England -- Pakistan match, and Kettleborough, in the Bangladesh -- India match. Rod Tucker of Australia and Kumar Dharmasena of Sri Lanka, who also officiated in the semi-finals as on - field umpires, were appointed as the TV umpire and reserve umpire respectively. David Boon of Australia was the match referee, completing the five - member match official team.
India remained unchanged from the side that played the semi-final, while Pakistan brought back their leading pacer Mohammad Amir, who had been ruled out of the semi-final against England due to a back spasm; Amir replaced Rumman Raees. Indian captain Virat Kohli won the toss and elected to field first, sending Pakistan in to bat.
The Pakistani opening pair, Azhar Ali and Fakhar Zaman, put on 128 runs before Ali was run out for 59 runs off the last ball of the 22nd over. Zaman, who seemed to have been out for 3 runs, only for a no ball by Jasprit Bumrah to save him, continued on his way to a 92 - ball century -- his first at ODI level -- eventually falling to Hardik Pandya on the first ball of the 33rd over. He made 114 runs from 106 balls, which included twelve fours and three sixes. After his dismissal, the other Pakistani batsmen kept the score ticking over. Mohammad Hafeez plundered 57 not out from 37 balls, including four fours and three sixes. Pakistan eventually finished on 338 / 4 -- their second - highest ODI score against India -- after 50 overs. Bhuvneshwar Kumar was the pick of the Indian bowlers, finishing with 1 / 44 from 10 overs (including two maidens).
India started poorly, losing two early wickets to Mohammad Amir. Off the third ball of the game, Rohit Sharma was out leg before wicket for a three - ball duck. In the third over, Virat Kohli was dropped in the slips on just five runs but was caught off the next ball by Shadab Khan at point. India 's poor form continued until, in the middle of the innings, Hardik Pandya and Ravindra Jadeja managed a rapid 80 - run partnership prior to Pandya being run out. However, this was India 's only batting highlight, as the tail was quickly dismissed and India were all out after 30.3 overs, not even managing half of Pakistan 's total.
Fall of wickets: 1 -- 128 (Azhar Ali, 22.6 ov), 2 -- 200 (Fakhar Zaman, 33.1 ov), 3 -- 247 (Shoaib Malik, 39.4 ov), 4 -- 267 (Babar Azam, 42.3 ov)
Fall of wickets: 1 -- 0 (Sharma, 0.3 ov), 2 -- 6 (Kohli, 2.4 ov), 3 -- 33 (Dhawan, 8.6 ov), 4 -- 54 (Yuvraj Singh, 12.6 ov), 5 -- 54 (Dhoni, 13.3 ov), 6 -- 72 (Jadhav, 16.6 ov), 7 -- 152 (Pandya, 26.3 ov), 8 -- 156 (Jadeja, 27.3 ov), 9 -- 156 (Ashwin, 28.1 ov), 10 -- 158 (Bumrah, 30.3 ov)
The Pakistani team were greeted with a heroic welcome by fans upon their return home. Prime Minister Nawaz Sharif posted a congratulatory message on social media, and announced a cash reward of Rs 1 crore (US $95,000) for each player. A ceremony was held for the players at the Prime Minister 's Secretariat on 4 July. The property developer Bahria Town presented a sum of Rs 10 lakh (US $9,500) for every player, and awarded a one kanal plot to Fakhar Zaman for his performance.
In India, the loss was met with agitation by several fans. However, many Indians also commended Pakistan 's performance and expressed solidarity with the Indian team irrespective of the result. In Kashmir, widespread pro-Pakistan celebrations were reported amongst locals. Twenty - one Indian men who were allegedly celebrating Pakistan 's victory were charged under India 's sedition laws, and remanded in custody. The charges were dropped a few days later after the complainants accused the police of filing a "false case ''.
Two days after the match, India coach Anil Kumble stepped down from his position, amid reports of a rift between him and some of the players including captain Virat Kohli.
Pakistan 's ICC team ranking for ODIs improved from eighth to sixth position, jumping ahead of Sri Lanka and Bangladesh. In the bowlers ' rankings, Hasan Ali climbed 12 spots to reach seventh, while Babar Azam rose by three ranks to fifth on the batting rankings.
|
how tall do u have to be to ride kingda ka | Kingda Ka - wikipedia
Kingda Ka is a steel accelerator roller coaster located at Six Flags Great Adventure in Jackson, New Jersey, United States. It is the world 's tallest roller coaster, the world 's second fastest roller coaster, and was the second strata coaster ever built. It was built by Stakotra, a subcontractor to Intamin. Riders must be at least 54 inches (137 cm) tall to ride the roller coaster.
The train is launched by a hydraulic launch mechanism to 128 miles per hour (206 km / h) in 3.5 seconds. At the end of the launch track, the train climbs the main top hat tower, reaching a height of 456 feet (139 m) and spanning over a 3,118 - foot - long (950 m) track by the end of the ride.
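As a rough back - of - the - envelope check (not stated in the source), the quoted launch figures of 0 to 128 miles per hour in 3.5 seconds imply an average acceleration of roughly 16 m/s², or about 1.7 g:

```python
# Average acceleration implied by Kingda Ka's quoted launch figures:
# 0 to 128 mph in 3.5 seconds.
MPH_TO_MS = 0.44704      # metres per second per mile per hour
G = 9.80665              # standard gravity in m/s^2

launch_speed_ms = 128 * MPH_TO_MS      # ~57.2 m/s (206 km/h)
avg_accel = launch_speed_ms / 3.5      # ~16.3 m/s^2
avg_accel_g = avg_accel / G            # ~1.67 g

print(f"{avg_accel:.1f} m/s^2 (~{avg_accel_g:.2f} g)")
```

This is only the average over the launch; the peak force felt by riders during a hydraulic launch can differ from this figure.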
Plans to build Kingda Ka were announced on September 10, 2003, at an event held for roller coaster enthusiasts and the media. The event revealed the park 's goal to build "the tallest and fastest roller coaster on earth '', reaching 456 feet (139 m) and accelerating up to 128 miles per hour (206 km / h) in 3.5 seconds. On January 13, 2005, Kingda Ka 's tower construction was completed, and on May 21, 2005, the ride opened to the public. Kingda Ka became the "tallest '' and "fastest '' roller coaster in the world, taking both world records from Top Thrill Dragster at Cedar Point. It lost the title of world 's fastest when Formula Rossa at Ferrari World opened in November 2010. Intamin designed both Kingda Ka and Top Thrill Dragster, and the two share a similar design and layout that differs primarily by the theme and the additional hill featured on Kingda Ka. Both rides were built by Stakotra and installed by Martin & Vleminckx.
On August 29, 2013, Six Flags Great Adventure officially announced Zumanjaro: Drop of Doom for the 2014 season. The new attraction was attached to the Kingda Ka coaster. The drop tower features three gondolas integrated into the existing structure and was also built by Intamin. Kingda Ka closed at the start of the 2014 season so that Zumanjaro: Drop of Doom could be constructed onto its tower. Kingda Ka reopened on weekends from Memorial Day Weekend and fully reopened when Zumanjaro: Drop of Doom was completed on July 4, 2014.
Kingda Ka 's layout and experience are nearly identical to Top Thrill Dragster 's. After the train has been locked and checked, it moves slowly out of the station to the launch area. It passes through a switch track, which allows four trains (on two tracks) to load simultaneously. When the signal to launch is given, the train rolls back slightly (to engage the catch car) and the brakes on the launch track retract. A recorded voice says, "Arms down, head back, hold on! '' The launch occurs five seconds after the hissing sound of the brake fins retracting or the warning voice. Kingda Ka 's horn previously sounded before each launch, but it was silenced due to noise complaints from nearby residents; the horn now sounds only when Kingda Ka first launches after being idle for a length of time. When the train is in position, the hydraulic launch mechanism accelerates the train from 0 to 128 miles per hour (0 to 206 km / h) in 3.5 seconds. At the end of the launch track, the train climbs the main tower (or top hat) and rolls 90 degrees to the right before reaching a height of 456 feet (139 m). The train then descends 418 feet (127 m) straight down through a 270 - degree right - hand spiral. The train climbs the second hill of 129 feet (39 m), producing a moment of weightlessness, before being smoothly brought to a stop by the magnetic brakes; it then makes a left - hand U-turn and enters the station. The ride lasts 28 seconds from the start of the launch.
Kingda Ka is themed as a mythical tiger, named for the 500 lb (230 kg) Golden Tabby Bengal tiger who lived in an adjacent exhibit before moving to the park 's safari. The ride 's sign and station have a Nepalese style. The queue line is surrounded by bamboo, which augments the jungle theme. Jungle music is played during the wait and throughout the Golden Kingdom section of the park, which was built for the ride.
The hydraulic launch motor is capable of producing 20,800 peak horsepower (15.5 MW). Because of the high speed and open nature of the trains, the ride will not operate in light rain.
Kingda Ka 's four trains are color - coded for easy identification (green, dark blue, teal, and orange) and are numbered; the four colors are also used for the seats and restraints. Each train seats 18 people (two per row). The rear car has one row, while the rest have two. The rear row of each car is positioned higher than its front row for better visibility.
Each of Kingda Ka 's trains has an extra row of seat mounts. The panels could be removed for the installation of additional seats in the future. This modification would increase the capacity of each train from 18 to 20, and the hourly capacity of the coaster from 1400 to 1600 riders per hour. Kingda Ka 's station is prepared for this modification, with entrance gates for the currently - nonexistent row of seats.
Kingda Ka 's over-the - shoulder restraint system consists of a thick, rigid lap bar and two thin, flexible over-the - shoulder restraints. Because the over-the - shoulder portions of the restraint are not rigid, the hand grips are mounted to the lap bar. Kingda Ka 's restraints are also held down by a belt, in case the main locking system fails. To speed loading, riders are asked to secure their own restraints if possible.
Kingda Ka 's station has two parallel tracks, with switch tracks at the entrance and exit. Each of the station 's tracks is designed to accommodate two trains, so each of the four trains can be operated from its own station bay. While all of the trains are mechanically identical and able to load and unload at each of the four individual station bays, the original plan was for all trains to operate at the same time, with each train loading and unloading at its own bay. During normal operation, trains on one side are loaded while trains on the other side are launched. When both sides of the station are in use, an employee directs riders in line to a particular side, where they can choose to sit in the front or rear of the train. In recent seasons it has become common for only one train bay (the forward one on the side opposite the parking lot) to be used for loading, unloading, and dispatching trains, with the other train or trains in operation on any given day waiting either in the station behind a loading train or outside the station on the brakes that follow the second hill. Two operators load, check and dispatch each train; another launches the trains. The queue and station music comes from two sources: Safri Duo, whose Episode II album is played almost in its entirety, and DJ Quicksilver 's remix of Survivor 's "Eye of the Tiger ''.
A train may occasionally experience a rollback following a launch. A rollback occurs when the train fails to make it over the top of the tower and descends back down the side it was launched. Kingda Ka includes retractable magnetic brakes on its launch track to prevent a train from rolling back into the loading station.
On June 8, 2005, a bolt failed inside a trough that the launch cable travels through. This caused the liner to come loose creating friction on the cable and preventing the train from accelerating to the correct speed. The rubbing of the cable against the inside of the metal trough caused sparks and shards of metal to fly out from the bottom of the train. The ride was closed for almost two months following the incident.
Damage occurred to the launch cable which was frayed and required replacement, to the engine including minor damage to seals, and to many of the brake fins. The brake fins in the launch section are mounted to keep fast - moving trains from moving backward into the station. However, the fast - moving train being pulled forward caused an unexpected stress on a number of fins bending them forward. Not all required replacement, but there were more damaged brake fins than Six Flags had replacements for. Extra brake fins had to be ordered from the manufacturer, Intamin in Switzerland, and the ride had to undergo thorough testing following the repair. Kingda Ka reopened on August 4.
Kingda Ka was struck by lightning in May 2009 and suffered serious damage. The ride was closed for three months for repairs and reopened on August 21, 2009.
On August 27, 2011, Kingda Ka suffered unspecified damage shortly before Hurricane Irene; that same day, Six Flags Great Adventure did not open due to the approaching hurricane. While it is unknown whether additional damage occurred due to the storm, the coaster was damaged to the extent that it could not run before Irene. Kingda Ka remained closed until the start of the 2012 operating season on April 5.
Shortly before 5: 00 pm on July 26, 2012, a young boy was sent to the hospital after suffering minor injuries from being struck by a bird during normal operation. The ride resumed normal operation shortly after the incident.
|
how many sherlock holmes movies starring basil rathbone | Sherlock Holmes (1939 Film series) - wikipedia
A series of fourteen films based on Sir Arthur Conan Doyle 's Sherlock Holmes stories were released between 1939 and 1946; the British actors Basil Rathbone and Nigel Bruce played Holmes and Dr. John Watson, respectively. The first two films in the series were produced by 20th Century Fox and released in 1939. The studio stopped making the films after these, but Universal Studios acquired the rights from the Doyle estate and produced a further twelve films.
Although the films from 20th Century Fox had large budgets, high production values and were set in the Victorian era, Universal Studios updated the films to have Holmes fighting the Nazis, and produced them as B pictures with lower budgets. Both Rathbone and Bruce continued their roles when the series changed studios, as did Mary Gordon, who played the recurring character Mrs. Hudson.
In the 1970s four of the Universal - produced films fell into the public domain when their copyright was not renewed. These four films were restored and colorized. Some of the films in the series had become degraded over time, with some of the original negatives lost and others suffering from nitrate deterioration because of the unstable cellulose nitrate film. The UCLA Film and Television Archive restored the series, putting the films onto modern polyester film, in a process that was jointly paid for by UCLA, Warner Bros. and Hugh Hefner.
In 1938 Basil Rathbone was cast as Sherlock Holmes for the 20th Century Fox adaptation of The Hound of the Baskervilles; Nigel Bruce was chosen to play Dr. John Watson. Darryl F. Zanuck, Gregory Ratoff and Gene Markey made the choice of Rathbone as Holmes during a conversation at a party in Hollywood. Filming began on 29 December 1938 under the direction of Sidney Lanfield and the film was released on 31 March 1939. Later that year a second film, The Adventures of Sherlock Holmes, followed, which was based on Sherlock Holmes, an 1899 stage play written by William Gillette. Although 20th Century Fox planned to make further Holmes films with Rathbone and Bruce, complications in negotiations between the company and the estate of the character 's creator, Arthur Conan Doyle, brought a premature end to the studio 's involvement; their decision to withdraw from further productions was also because the Second World War meant that "foreign agents and spies were much more typical and topical than the antiquated criminal activities of Moriarty and the like ''. On 2 October 1939, a month after the release of Adventures, Rathbone and Bruce resumed their roles on radio, in The New Adventures of Sherlock Holmes, with episodes written by Dennis Green and Anthony Boucher. Rathbone left the series in May 1946, while Bruce remained until 1947, with Tom Conway replacing Rathbone.
In February 1942, following negotiations with the Doyle estate, Universal Studios acquired the rights to the films and signed contracts with Rathbone and Bruce to continue their portrayals. Universal 's deal -- worth $300,000 -- was for seven years, and they purchased the rights to 21 stories in the canon in a contract that stipulated that the company had to make three films a year, of which two had to be adaptations of Doyle 's stories. Universal decided to update the stories to a Second World War setting; the first film, Sherlock Holmes and the Voice of Terror -- based on Doyle 's 1917 story "His Last Bow '' -- has Holmes attempting to capture a Nazi agent. The change of era for Holmes is explained in the opening titles, with a caption that informs viewers that Holmes is "ageless, invincible and unchanging '', going on to say that he was "solving significant problems of the present day ''.
While the 20th Century Fox adaptations had high production values and big budgets, the Universal films changed the approach of the series, and aimed "simply to be entertaining ' B ' pictures ''. The second film produced by Universal, Sherlock Holmes and the Secret Weapon, was directed by Roy William Neill; he went on to direct the remaining ten films -- and produce the final nine -- in the Universal series.
Rathbone became frustrated with the role and left the series in 1946; he stated that his "first picture was, as it were, a negative from which I merely continued to produce endless positives of the same photograph ''. Universal considered replacing him on screen with Tom Conway -- as they subsequently did with the radio series -- but instead decided to end the series, despite still having the rights for the next three years. In October 1946, shortly after the end of the series, Neill died of a heart attack.
The writer David Stuart Davies concluded that Basil Rathbone was "the actor who has come closest to creating the definitive Sherlock Holmes on screen '', also describing the choice as "inspired ''. The historian Alan Barnes agrees, and wrote that "Rathbone was Sherlock Holmes ''. The choice of Nigel Bruce as Watson was more contentious, with Davies pointing out that "Bruce 's characterisation bore little relation '' to the written Watson, even though the portrayal eventually produced "an endearingly avuncular figure ''. The historian David Parkinson agrees, and wrote that Bruce 's "avuncular presence provided the perfect counterbalance to Rathbone 's briskly omniscient sleuth ''. Barnes notes that, despite the criticisms against him, Bruce rehabilitated Watson, who had been a marginal figure in the cinematic Holmes canon to that point: "after Bruce, it would be a near - unthinkable heresy to show Holmes without him ''. With the combination of Rathbone and Bruce, the historian Jim Harmon considered that this was "near perfect casting ''.
The series included continuity of two actors playing recurring characters: Mary Gordon, who played Mrs. Hudson, and Dennis Hoey, who portrayed Inspector Lestrade. Other recurring characters were played by numerous actors, with Professor Moriarty being played by three people: Lionel Atwill in Sherlock Holmes and the Secret Weapon, Henry Daniell in The Woman in Green and George Zucco in The Adventures of Sherlock Holmes. Some supporting actors reappeared in a number of roles in what Davies called the series ' "own little repertory company of actors ''; these included Harry Cording, who played seven roles in different films, and Gerald Hamer and Harold De Becker, who each played four roles, among others.
After Professor Moriarty is acquitted of murder, Holmes and Watson are visited by Ann Brandon, who tells the detectives that her brother Lloyd has received a strange note -- a drawing of a man with an albatross hanging around his neck -- identical to one received by her father just before his murder ten years previously; her brother is killed later that day. Holmes believes an attempt will be made on Ann 's life and disguises himself as a music hall entertainer for a garden party, where he captures her assailant. The assassin is Gabriel Mateo, out for revenge on the Brandons for the murder of his father by Ann 's father in a dispute over ownership of their South American mine; Mateo reveals that it was Moriarty who urged him to seek revenge. Holmes realises Moriarty was using Ann 's attempted murder as a distraction from his real crime: an attempt to steal the Crown Jewels. Holmes goes to the Tower of London, where Moriarty is masquerading as a policeman. The pair struggle, and Moriarty falls, presumably to his death.
During the Second World War, Holmes is consulted by the British Inner Council to capture a Nazi agent who broadcasts under the name the "Voice of Terror '', and who appears to be running a sabotage ring in England. After Gavin, one of his underworld contacts, is killed on his doorstep, Holmes convinces Kitty -- Gavin 's wife -- to find out the meaning of a clue Gavin had uncovered. She does so, and manages to inveigle her way into the house of Meade, the main Nazi agent in the ring. After a tip - off from Kitty, Holmes takes the Inner Council to an abandoned church on the coast of southern England, where they thwart a German invasion. Holmes then uncovers the mole in the council: Sir Evan Barham, head of the council, is in fact the German spy Heinrich von Bork, who had been posing as Barham for the previous twenty years.
Four of the films -- Sherlock Holmes and the Secret Weapon, The Woman in Green, Terror by Night and Dressed to Kill -- are in the public domain. In 2006 the four films were digitally restored and computer colorized by Legend Films, who released the color and black and white films on DVDs.
The two 20th Century Fox films -- The Hound of the Baskervilles and The Adventures of Sherlock Holmes -- had survived complete and in good condition, but those in the Universal series suffered badly over the years, as they passed through the hands of different copyright owners. In 1993 the UCLA Film and Television Archive started a restoration project on the series after the unstable cellulose nitrate film was found to be suffering from deterioration. Restoration on the first six films -- The Woman in Green, The Pearl of Death, Sherlock Holmes and the Secret Weapon, The Scarlet Claw, Terror by Night and The Spider Woman -- took four years from 1993; the costs for the restoration were met by UCLA and Hugh Hefner, who was a fan of the Rathbone - Holmes series. From 1998 Warner Bros. matched Hefner 's funding and the remaining six films -- Dressed to Kill, Pursuit to Algiers, Sherlock Holmes Faces Death, The House of Fear, Sherlock Holmes and the Voice of Terror and Sherlock Holmes in Washington -- were then restored, a process that was completed in 2001.
The restoration involved transferring the films onto modern polyester film and restoring the images frame - by - frame. The process was complicated by the poor quality of some of the films. Robert Gitt, the UCLA Preservation Officer, commented that "the copies of the films that survive are many generations removed from the original and flaws have been photographed and re-photographed into these copies ''. The Scarlet Claw and Pursuit to Algiers were both in very poor condition and Dressed to Kill was missing some 35mm reels. This being the case, the restorers had to blow up some 16mm prints to replace the scenes.
Ant - Man and the Wasp - wikipedia
Ant - Man and the Wasp is a 2018 American superhero film based on the Marvel Comics characters Scott Lang / Ant - Man and Hope van Dyne / Wasp. Produced by Marvel Studios and distributed by Walt Disney Studios Motion Pictures, it is the sequel to 2015 's Ant - Man, and the twentieth film in the Marvel Cinematic Universe (MCU). The film is directed by Peyton Reed and written by the writing teams of Chris McKenna and Erik Sommers, and Paul Rudd, Andrew Barrer, and Gabriel Ferrari. It stars Rudd as Lang and Evangeline Lilly as Van Dyne, alongside Michael Peña, Walton Goggins, Bobby Cannavale, Judy Greer, Tip "T.I. '' Harris, David Dastmalchian, Hannah John - Kamen, Abby Ryder Fortson, Randall Park, Michelle Pfeiffer, Laurence Fishburne, and Michael Douglas. In Ant - Man and the Wasp, the titular pair work with Hank Pym to retrieve Janet van Dyne from the quantum realm.
Talks for a sequel to Ant - Man began shortly after that film was released. Ant - Man and the Wasp was officially announced in October 2015, with Rudd and Lilly returning to reprise their roles. A month later, Ant - Man director Reed was officially set to return; he was excited to develop the film from the beginning after joining the first film later in the process, and also to introduce Hope van Dyne as the Wasp in this film, insisting that she and Lang are equals. Filming took place from August to November 2017, at Pinewood Atlanta Studios in Fayette County, Georgia, as well as Metro Atlanta, San Francisco, Savannah, Georgia, and Hawaii.
Ant - Man and the Wasp had its world premiere in Hollywood on June 25, 2018, and was released on July 6, 2018, in the United States in IMAX and 3D. The film received praise for its levity, humor, and performances, particularly those of Rudd and Lilly, and has grossed over $622 million worldwide, making it the seventh - highest - grossing film of 2018.
Two years after Scott Lang was placed on house arrest because of his role in skirmishes between the Avengers in violation of the Sokovia Accords, Hank Pym and his daughter Hope van Dyne briefly manage to open a tunnel to the quantum realm, where they believe Pym 's wife Janet van Dyne might be trapped after shrinking to sub-atomic levels in 1987. Because he also visited the quantum realm, Lang is now quantumly entangled with Janet and receives an apparent message from her.
With only days left of house arrest, Lang sends a message to Pym about this despite the strained relationship they have due to Lang 's actions with the Avengers. Soon after, Hope and Pym kidnap Lang, leaving a decoy so as not to arouse suspicion from FBI agent Jimmy Woo. Seeing the message as confirmation that Janet is alive, the trio work to create a stable tunnel so they can take a vehicle to the quantum realm and retrieve her. They arrange to buy a part needed for the tunnel from black market dealer Sonny Burch, who, it turns out, has realized the potential profit to be made from Pym 's research and double - crosses them. Donning the Wasp outfit, Hope fights Burch and his men off until she is attacked by a quantumly unstable masked woman. Lang tries to help fight off this "ghost '', but she escapes with Pym 's lab, which has been shrunk down to the size of carry - on luggage.
The three reluctantly visit Pym 's estranged former partner Bill Foster, who helps them locate the lab. The ghost restrains Lang, Hope, and Pym when they arrive and reveals herself to be Ava Starr. Her father Elihas, another former partner of Pym 's, accidentally killed himself and his wife during a quantum experiment that caused Ava 's unstable state. Foster reveals that he has been helping Ava, whom they plan to cure using Janet 's quantum energy. Believing that this will kill Janet, Pym refuses to help them, and the trio escape.
Opening a stable version of the tunnel this time, the three are able to contact Janet, who gives them a precise location to find her but warns that they only have two hours before the unstable nature of the realm separates them for a century. Burch learns their location from Lang 's business partners Luis, Dave, and Kurt, and informs a contact at the FBI. Luis warns Lang, who rushes home before Woo can see him violating his house arrest. This leaves Pym and Hope to be arrested, allowing Ava to take their lab.
Lang is soon able to help Pym and Hope escape custody, and they find the lab. Lang and Hope distract Ava while Pym enters the quantum realm to retrieve Janet, but the pair end up fighting Burch and his men, which allows Ava to begin taking Janet 's energy. Luis, Dave, and Kurt help incapacitate Burch and his men so that Lang and Hope can stop Ava. Pym and Janet arrive safely from the quantum realm and Janet voluntarily gifts some of her energy to Ava to temporarily stabilize her.
Lang returns home once again, in time for a now suspicious Woo to release him at the end of his house arrest. Ava and Foster go into hiding. In a mid-credits scene, Pym, Lang, Hope, and Janet plan to harvest quantum energy to continue helping Ava. While Lang is in the quantum realm doing this, the other three disintegrate.
Additionally Stan Lee, co-creator of the titular heroes, has a cameo in the film as a man whose car gets shrunk by accident. Michael Cerveris appears as Ava 's father Elihas Starr while Riann Steele plays his wife Catherine. Tim Heidecker and Brian Huskey appear in cameos as a whale boat captain named Daniel Gooobler and a teacher at Cassie 's school, respectively. Sonny Burch 's team of men includes Divian Ladwa as Uzman, Goran Kostić as Anitolov, and Rob Archer as Knox, while Sean Kleier portrays Stoltz, Burch 's FBI inside man and Jimmy Woo 's subordinate. Tom Scharpling and Jon Wurster of The Best Show make brief appearances as Burch 's SUV drivers.
In June 2015, Ant - Man director Peyton Reed expressed interest in returning for a sequel or prequel to that film, saying that he had "really fallen in love with these characters '' and felt "there 's a lot of story to tell with Hank Pym ''. A month later, Pym actor Michael Douglas said he was not signed for any additional films, but "would look forward to more if it comes my way '', and expressed the desire to have his wife Catherine Zeta - Jones cast as Janet van Dyne for a potential follow - up. Evangeline Lilly -- who played the daughter of Pym and Van Dyne, Hope van Dyne -- wanted to see Michelle Pfeiffer in the role. Producer Kevin Feige revealed that the studio had a "supercool idea '' for the next Ant - Man film, and "if audiences want it, we 'll find a place to do it. '' Reed also mentioned that there had been talks of making a standalone adventure with Hank Pym as Ant - Man, possibly including the original opening to Ant - Man featuring Jordi Mollà which was cut from the final film. Eric Eisenberg of Cinema Blend opined that a standalone adventure with Pym and the cut sequence would be a good candidate to revive the Marvel One - Shots short film series. By the end of July, David Dastmalchian expressed interest in returning for a sequel as Kurt.
In October 2015, Marvel Studios confirmed the sequel, titled Ant - Man and the Wasp, with a scheduled release date of July 6, 2018. Reed was in negotiations to direct the sequel by the end of the month, and announced his return in November, along with the confirmation of Paul Rudd and Lilly returning as Scott Lang / Ant - Man and Hope van Dyne / Wasp, respectively. Though he had been offered sequels in the past, Reed had always declined out of a lack of interest, but was excited to work on Ant - Man and the Wasp because there was "a lot more story to tell with these characters that I have a genuine affection and kind of protective feeling about ''. He was also able to build the sequel "from the ground up '', as he joined the first film late in the process following the departure of original writer and director Edgar Wright, and wanted to explore elements that he had set up in the first film. He first began work on an outline for the sequel, which he thought could be "weird, unique and different '' now that the characters ' origins had been established. On being the first MCU film to have a female character in the title with the Wasp, Reed called it "organic '' and noted the Wasp 's final line in Ant - Man -- ' It 's about damn time ' -- as "very much about her specific character and arc in that movie, but it is absolutely about a larger thing. It 's about damn time: We 're going to have a fully realized, very very complicated hero in the next movie who happens to be a woman. '' Reed pushed to ensure the Wasp received equal publicity and merchandise for the film, and wanted to explore the backstory of Janet van Dyne as well. He had "definite ideas '' of who should portray that character. Reed said the alternate title Wasp and the Ant - Man was briefly considered, but was not chosen due to fan expectation given the comics history of the phrase "Ant - Man and the Wasp ''.
That month, Adam McKay, one of the writers of Ant - Man, expressed interest in returning to write the film, and Douglas confirmed that he was in talks to return as well.
Reed stated in early December that the film may "call back '' to the heist film genre and tone of Ant - Man, but that Ant - Man and the Wasp would "have an entirely different genre template ''. He hoped to incorporate additional flashback sequences in the film, as well as explore Pym 's various identities from the comics and his psychology. Reed also said he was "excited '' about exploring and discovering the film version of the Ant - Man and Wasp relationship that is "a romantic partnership and a heroic partnership '' in the comics, a "different dynamic than we 've seen in the rest of the (MCU), an actual partnership. '' Additionally, Reed mentioned that pre-production would "probably '' start in October 2016, with filming scheduled for early 2017. Production writers for the first film, Gabriel Ferrari and Andrew Barrer, signed on to write the script along with Rudd, with writing starting "in earnest '' in January 2016. The next month, McKay stated that he would be involved with the film in some capacity. By April, the four writers and Reed had been "holed up in a room... brainstorming the story '', with Reed promising that it would have "stuff in it that you 've never ever seen in a movie before ''. Feige added that they wanted to "stay true to what made (Ant - Man) so unique and different '', and teased the potential of seeing the Giant - Man version of Lang that had been introduced in Captain America: Civil War (2016). Despite being "intimately involved in the writing and the development of the script '', Reed did not take or receive a writing credit on the film.
In June 2016, Reed said that for inspiration from the comics he had been looking at "early Avengers stuff and all the way up to the Nick Spencer stuff now '', and was focusing on iconic images that could be replicated in the film over story beats from the comics. He added that there was "definitely a chance '' for Michael Peña, Tip "T.I. '' Harris, and Dastmalchian to reprise their respective roles as Luis, Dave, and Kurt from the first film. At San Diego Comic - Con 2016, Feige stated that Reed and Rudd were still working on the script, and that filming was now expected to begin in June 2017. Rudd elaborated that they had "turned in a treatment, but it 's so preliminary. We 'll see. We have an idea of what it might look like, but it could change a lot from where we 're at now. '' The next month, Peña was confirmed to be returning as Luis, while filming was revealed to be taking place in Atlanta, Georgia. In early October, an initial script had been completed for the film that was awaiting approval from Marvel. Reed later revealed that early drafts of the script included a cameo appearance from Captain America, appearing during Luis ' flashback sequences as he was recapping Lang 's involvement in the airport battle in Captain America: Civil War. However, the writers chose to remove the appearance in the final script since the events of Civil War were already referenced frequently in the film, and this instance "did n't feel organic to the story. ''
At the start of November 2016, Reed said that the film 's production would transition from "the writing phase '' to "official prep '' that month, beginning with visual development. Reed reiterated his excitement for introducing the Wasp and "really designing her look, the way she moves, the power set, and figuring out, sort of, who Hope van Dyne is as a hero ''. Reed was inspired by the films After Hours (1985), Midnight Run (1988), and What 's Up, Doc? (1972) for the look and feel of Ant - Man and the Wasp. While the first film was more of a heist film, Reed described this as part action film, part romantic comedy, and was inspired by the works of Elmore Leonard where there are "villains, but we also have antagonists, and we have these roadblocks to our heroes getting to where they need to be ''. He also stated his disappointment in the Giant - Man introduction happening in Civil War, rather than an Ant - Man film, but noted that the appearance provided character development opportunities between Lang, Pym, and Van Dyne since Pym is "very clear in the first movie about how he feels about Stark and how he feels about The Avengers and being very protective of this technology that he has '', and so Reed thought Pym would be "pissed '' and Van Dyne would feel betrayed, which was Reed 's "in '' for those characters ' starting dynamics. Reed added that he spends "a lot of time '' talking with the other writers and directors of MCU films, and that he and the writers on this film wished to maintain "our little Ant - Man corner of the universe. Because it 's a whole different vibe tonally ''. Quantum physicist Spyridon Michalakis from the Institute for Quantum Information and Matter at the California Institute of Technology returned to consult on the film, after doing the same on Ant - Man, and explained the science behind getting extremely small to the filmmakers. 
Michalakis described the subatomic realm as "a place of infinite possibility, an alternative universe where the laws of physics and forces of nature as we know them have n't crystallized '' and suggested it should be represented in the film by "beautiful colors changing constantly to reflect transience. ''
In February 2017, Douglas confirmed that he would reprise his role as Hank Pym in the film. During the Hollywood premiere of Guardians of the Galaxy Vol. 2 in April, Dastmalchian confirmed his return as Kurt, and a month later, Harris confirmed his return as Dave as well. Through that May, Marvel was meeting with several actresses for a "key role '' in the sequel, with Hannah John - Kamen cast in the part at the beginning of June. The following month, Randall Park joined the cast as Jimmy Woo, and Walton Goggins was cast in an undisclosed role. At San Diego Comic - Con 2017, Park 's casting was confirmed; John - Kamen and Goggins ' roles were revealed to be Ghost and Sonny Burch, respectively; and the casting of Pfeiffer as Janet van Dyne and Laurence Fishburne as Bill Foster was announced. Judy Greer was confirmed to be reprising her role as Maggie from the previous film the following week. Louise Frogley served as costume designer on the film after doing so for Marvel 's Spider - Man: Homecoming (2017), and worked with Ivo Coveney to create the superhero suits for the film. Based on designs by Andy Park, the suits are updated for the film from the 1960s - inspired designs used in the first Ant - Man to more modern designs. The Wasp suit included practical wings which were replaced with digital wings for when they are expanded and ready for flight.
The Russo brothers, directors of Avengers: Infinity War and its untitled sequel, which were filming while Ant - Man and the Wasp was preparing to film, were in constant discussion with Reed in order to ensure story elements would line up between the films. Joe Russo added that Ant - Man and the Wasp would have "some (plot) elements that stitch in '' closely with Avengers: Infinity War, more so than some of the other films leading up to the Avengers films. Reed knew Ant - Man and the Wasp would be "a fairly stand - alone movie but... could not ignore the events of Infinity War '', with the biggest connection occurring in the film 's mid-credits scene. Since the events of Ant - Man and the Wasp occur over 48 hours, the timeline in relation to Infinity War was "left purposefully ambiguous '', with Reed noting there had been discussions of placing "little Easter eggs along the way, to start to reveal to the audience where the movie takes place in the timeline, (but t) hat felt not very fun to us and kind of obvious. '' Reed also liked that the film ends with closure and on a positive note, "and then to BANG -- give the audience a gut punch right after the main credits '', with the sequence showing Hank Pym, Janet van Dyne, and Hope van Dyne fading away due to the events of Avengers: Infinity War. The film also has a post-credits scene that shows the ant who doubled for Lang while under house arrest performing a drum solo.
Principal photography began on August 1, 2017, at Pinewood Atlanta Studios in Fayette County, Georgia, under the working title Cherry Blue; Dante Spinotti served as director of photography, shooting on Arri Alexa 65 cameras, with some sequences being shot with a Frazier lens. At the start of filming, Marvel revealed that Bobby Cannavale and Abby Ryder Fortson would also reprise their roles from the first film, respectively as Paxton and Cassie, and that Chris McKenna and Erik Sommers had contributed to the screenplay.
The film 's lab and quantum tunnel set was inspired by The Time Tunnel (1966 -- 67), and was the largest physical set built for an MCU film, which Reed jokingly said was "a little counter-intuitive ''. For the sequence where Janet van Dyne communicates through Lang, inspiration was taken from All of Me (1984), in which Lily Tomlin 's character is trapped in the body of Steve Martin 's character. There were discussions about having Pfeiffer perform the scene first to give Rudd an idea of how she would act, but the group ultimately decided to let Rudd invent the scene completely himself. For the chamber in Ghost 's lair, the production team under production designer Shepherd Frankel wanted to create an environment that was unique to the MCU, and designed the chamber with fresnel lenses to give it concentric - circle patterns that served a practical purpose for the film 's story as well as differentiating its aesthetics from other sets and creating mystery about the character. The chamber is surrounded by "support shapes '' to "create this feeling of desperation and yearning for family and stability ''.
Filming also took place in Metro Atlanta, with filming locations including the Atlanta International School, the Midtown and Buckhead districts of Atlanta, and the Samuel M. Inman Middle School in the city 's Virginia - Highland neighborhood; as well as Emory University and the Atlanta Motor Speedway in Hampton, Georgia. Additional filming took place in San Francisco in September 2017, in Savannah, Georgia in late October, and in Hawaii. Production wrapped on November 19, 2017.
In late November, Lilly said that the characters would try to enter the quantum realm in the film, and their potential success would "open a whole entire new multi-verse to enter into and play around in '' for the MCU. The film includes a clip from Animal House (1978), which Reed was reminded of while discussing the quantum realm science for the film. Reed insisted that the film be shorter than two hours since it would be following the "massive epic '' Infinity War and because it is "an action / comedy, and it did n't want to overstay its welcome ''. The film 's main credits sequence is a "table - top '' version of its action sequences, and was created by Elastic. An alternative idea that had been considered was to create a "fake behind - the - scenes documentary '' that would have made the film look like it was a 1950s Godzilla movie with "people in suits stomping on model cityscapes ''.
Visual effects for the film were created by DNEG, Scanline VFX, Method Studios, Luma Pictures, Lola VFX, Industrial Light & Magic, Cinesite, Rise FX, Rodeo FX, Crafty Apes, Perception NYC, Digital Domain, and The Third Floor.
DNEG worked on over 500 shots, including the effects for Ghost -- which were shared with the other vendors -- the third act chase sequence, and the final Ghost fight as Hank and Janet are returning from the quantum realm. For the "macro-photography '' sequences in the film, DNEG took a different approach from their work on Ant - Man due to issues including trying to get a camera to seem small enough to capture the small actions. Though some of the film was shot with a Frazier lens that provides extra depth of field, DNEG would still need to re-project the road higher and "raise the floor level '' to simulate a tiny - sized camera. As the third act chase sequence was mainly shot in Atlanta, while being set in San Francisco, DNEG VFX Supervisor Alessandro Ongaro noted it required "extensive environment work '', with background elements in some shots not being salvageable at all. DNEG ultimately created 130 unique environments for the chase. Clear Angle aided DNEG with the Lidar surveying and photography of San Francisco, and were able to get their information for Lombard Street down to millimeter resolution.
Lola once again worked on the de-aging sequences with Douglas, Pfeiffer, and Fishburne. The flashback sequences featuring a younger Hank Pym were set around the same time as the flashback sequences of Ant - Man, so Lola were able to use a similar process, referencing Douglas ' appearance in Wall Street (1987) and having the actor on set in a different wardrobe and wig. Lola VFX Supervisor Trent Claus felt Pfeiffer 's was less complicated, since "she has aged incredibly well '' and still has big hair and a big smile. Pfeiffer 's work from Ladyhawke (1985) and other films around that time was referenced. For Fishburne, his son served as his younger double, and helped inform Lola how the older Fishburne 's skin would have looked in certain lighting situations. The films Lola looked to for Fishburne 's younger self included Boyz n the Hood (1991) and Deep Cover (1992). Lola also made Fishburne thinner, and all actors had their posture adjusted.
Luma worked on the scenes where Ant - Man and the Wasp infiltrate Ghost 's hideout, where they had to recreate the entire environment with CGI. They also created the first quantum tunnel sequence where Ghost receives her powers, and the flashback missile launch, which had to be replicated exactly from how it appeared in Ant - Man. The new version of the quantum realm, designed by Reed and production VFX supervisor Stephane Ceretti, was created by Method. Method Studios VFX Supervisor Andrew Hellen, explained, "We did a lot of research into macro and cellular level photography, and played with different ways to visualize quantum mechanics. It has a very magical quality, with a scientific edge. We also used glitching effects and macro lensing to ground the footage, and keep it from feeling too terrestrial. '' Method also worked on the sequence when Lang is the size of a preschooler, and created the digital doubles for Ant - Man and Wasp; Method used the same level of detail on the digital double suits regardless of what scale they were.
In June 2017, Reed confirmed that Christophe Beck, who composed the score for Ant - Man, would return for Ant - Man and the Wasp. Beck reprised his main theme from Ant - Man, and also wrote a new one for Wasp that he wanted to be "high energy '' and show that she is more certain of her abilities than Lang. When choosing between these themes for specific scenes throughout the film, Beck tried to choose the Wasp theme more often so there would be "enough newness in the score to feel like it 's going new places, and is n't just some retread. '' Hollywood Records and Marvel Music released the soundtrack album digitally on July 6, 2018.
Concept art and "pre-CGI video '' for the film was shown at the 2017 San Diego Comic - Con. In January 2018, Hyundai Motor America announced that the 2019 Hyundai Veloster would play a significant role in the film, with other Hyundai vehicles also appearing. The first trailer for the film was released on January 30, 2018 on Good Morning America, and used the guitar riff from Adam and the Ants ' "Ants Invasion ''. David Betancourt of The Washington Post called the release, the day after the widely praised Black Panther premiere, a "smart move ''; with Black Panther and Avengers: Infinity War also releasing in 2018, "it can be easy (to) forget that hey, there is an Ant - Man sequel coming this year... So Marvel Studios giving us a quick reminder with this trailer release is logical ''. Tracy Brown, writing for the Los Angeles Times, praised how the trailer prominently featured Lilly 's Van Dyne "(showing) off how she was always meant to be a superhero ''. A second trailer was released on May 1, 2018, following a teaser video featuring the Infinity War cast asking "where were Ant - Man and The Wasp? '' in that film. Graeme McMillan of The Hollywood Reporter felt the trailer made the film feel "very much like an intentional antidote for, or at least alternative to, the grimness of Infinity War 's downbeat ending '', calling it "a smart move '' since it could be considered "a palate cleanser and proof that Marvel has more to offer... before audiences dive back into the core narrative with next year 's Captain Marvel. '' In June 2018, Feige presented several scenes from the film at CineEurope. Promotional partners for the film included Dell, Synchrony Financial, and Sprint. In total, Disney spent about $154 million worldwide on promoting the film.
Ant - Man and the Wasp had its world premiere at the El Capitan Theatre in Hollywood on June 25, 2018, and was released in the United States on July 6, 2018, where it opened in 4,206 theaters, of which 3,000 were in 3D, 403 were in IMAX, over 660 were in premium large format, and over 220 were in D - Box and 4D.
The film was scheduled to be released in the United Kingdom on June 29, 2018, but was rescheduled in November 2017 to August 3, 2018, in order to avoid competition with the 2018 FIFA World Cup. Charles Gant of The Guardian and Screen International noted, "The worry for film distributors is that audiences will be caught up in the tournament. So it 's easier to play safe and not date your film at this time, especially during the group stage, when all the qualifying nations are competing. '' Tom Butler of Yahoo! Movies UK added that, unlike the first film, which was one of the lowest - grossing MCU films in the UK, anticipation levels for the film "are at an all - time high following the events of Infinity War '' and "UK audiences will probably have found out what happens in the film well before it opens in UK cinemas, and this could have a negative impact on its box office potential. '' Butler and Huw Fullerton of Radio Times both opined the delay could also be in part because of Disney also delaying the United Kingdom release of Incredibles 2 to July 13, 2018 (a month after its United States release), and not wanting to compete with itself with the two films. This in turn led fans in the country to start a Change.org petition to have Disney move the release date up several weeks, similarly to how Avengers: Infinity War 's United States release was moved up a week the previous May.
Ant - Man and the Wasp was released on digital download by Walt Disney Studios Home Entertainment on October 2, 2018, and on Ultra HD Blu - ray, Blu - ray, and DVD on October 16. The digital and Blu - ray releases include behind - the - scenes featurettes, an introduction from Reed, deleted scenes, and blooper reels. The digital release also features a look at the role concept art plays in bringing the various MCU films to life and a faux commercial for Online Close - Up Magic University.
As of October 28, 2018, Ant - Man and the Wasp has grossed $216.6 million in the United States and Canada, and $405.8 million in other territories, for a worldwide total of $622.5 million. Following its opening, Deadline Hollywood estimated the film would turn a net profit of around $100 million. It is the seventh - highest - grossing film of 2018.
Ant - Man and the Wasp earned $33.8 million on its opening day in the United States and Canada (including $11.5 million from Thursday night previews), for an opening weekend total of $75.8 million; this was a 33 % improvement over the first film 's debut of $57.2 million. Its opening included $6 million from IMAX screens. In its second weekend, the film earned $28.8 million, coming in second behind Hotel Transylvania 3: Summer Vacation, and in its third weekend grossed $16.1 million, coming in fourth. The film placed sixth in its fourth weekend, seventh in its fifth weekend, and tenth in its sixth weekend. BoxOffice Magazine projected a final domestic gross of $225 million.
Outside the United States and Canada, the film earned $85 million from 41 markets, where it opened number one in all except New Zealand. Its South Korea opening was $20.9 million (which included previews). The $15.5 million opening from the market without previews was the second - best opening of 2018 behind Avengers: Infinity War. In its second weekend, playing in 44 markets, it remained number one in Australia, Hong Kong, South Korea, and Singapore. The film opened in France in its third weekend, earning $4.1 million, and opened in Germany in its fourth, where it was number one and earned $2.8 million, including previews. The next weekend saw Ant - Man and the Wasp open at number one (when including previews) in the United Kingdom, where it earned $6.5 million, and two weeks later, Italy opened number one with $2.7 million (including previews). In its eighth weekend, the film 's $68 million opening in China was the fourth - best MCU opening in China and the third - highest Hollywood film opening of 2018. $7.2 million was from IMAX, which was the best August IMAX opening in China. The film opened in Japan the next weekend, earning $3.7 million, which was the top Western film for the weekend. As of September 9, 2018, the film 's largest markets were China ($117.5 million), South Korea ($42.4 million), and the United Kingdom ($21.5 million).
The review aggregation website Rotten Tomatoes reported an approval rating of 88 % based on 342 reviews, and an average rating of 7 / 10. The website 's critical consensus reads, "A lighter, brighter superhero movie powered by the effortless charisma of Paul Rudd and Evangeline Lilly, Ant - Man and The Wasp offers a much - needed MCU palate cleanser. '' Metacritic, which uses a weighted average, assigned the film a normalized score of 70 out of 100, based on 56 critics, indicating "generally favorable reviews ''. Audiences polled by CinemaScore gave the film an average grade of "A -- '' on an A+ to F scale, down from the "A '' earned by the first film.
Peter Travers, writing for Rolling Stone, gave the film 3 out of 4 stars and praised Rudd and Lilly, saying, "The secret of Ant-Man and the Wasp is that it works best when it doesn't try so hard, when it lets charm trump excess and proves that less can be more even in the Marvel universe." Richard Roeper of the Chicago Sun-Times praised the lightweight tone as a treat and a breather following the "dramatically heavy conclusion" of Avengers: Infinity War. He also praised the cast, especially Rudd and Fortson, as well as the visual effects and the inventive use of shrinking and growing in the action scenes. Manohla Dargis of The New York Times felt the film's "fast, bright and breezy" tone was a vast improvement over the first film, praising Reed's direction. She also praised Rudd, felt Lilly found "her groove" in the film, and wrote that the supporting cast all had "scene-steal(ing)" sequences. Simon Abrams of RogerEbert.com said the film was "good enough", a "messy, but satisfying" sequel that he felt managed to juggle its many subplots while giving Rudd's Lang some decent character development.
Variety's Owen Gleiberman called the film "faster, funnier, and more cunningly confident than the original", and felt Reed was able to give the film enough personality to overcome its two-hour runtime and effects-heavy climax, though he cautioned that this was "not quite the same thing as humanity. But it's enough to qualify as the miniature version." At The Washington Post, Ann Hornaday called the film "instantly forgettable" and criticized its plot, which she felt included some "filler" subplots, but found the film "no less enjoyable" because of this; she particularly praised Rudd along with the action and effects. Writing for the Boston Globe, Ty Burr called the film the perfect "summer air-conditioning movie", finding it fun, funny, superficial, and an improvement over the first. He also wrote that the film had too many subplots and not enough of Pfeiffer, but was pleased with the lack of connection between the overall story and the rest of the MCU, and with the focus on "pop trash" comedy. Stephanie Zacharek, writing for Time, said it was "hard to actively dislike" the film, which she thought had reasonably fun action and stand-out moments between Rudd and Fortson, but she was less impressed with the larger, effects-heavy action sequences and felt the focus on Lilly as a better hero than Rudd was "just checking off boxes in the name of gender equality."
Ahead of the film's release, Reed noted that he and Marvel were "hopeful" about a third film, having discussed potential story points. Michael Douglas also expressed interest in playing a younger version of his character Hank Pym in a prequel, something Reed had already teased back in 2015.
Italian War of 1551–59
The Italian War of 1551 (1551–1559), sometimes known as the Habsburg–Valois War and the Last Italian War, began when Henry II of France, who had succeeded Francis I to the throne, declared war against Holy Roman Emperor Charles V with the intent of recapturing Italy and ensuring French, rather than Habsburg, domination of European affairs. The war was the last of a series of wars fought between the same parties since 1521. Historians have emphasized the importance of gunpowder technology, new styles of fortification designed to resist cannon fire, and the increased professionalization of soldiers.
Henry II sealed a treaty with Suleiman the Magnificent in order to cooperate against the Habsburgs in the Mediterranean. This was triggered by the conquest of Mahdiya by the Genoese admiral Andrea Doria on 8 September 1550, on behalf of Charles V. The alliance allowed Henry II to push for French conquests towards the Rhine, while a Franco-Ottoman fleet defended southern France.
The 1551 Ottoman siege of Tripoli was the first step of the all-out Italian War of 1551–59 in the European theater, and in the Mediterranean the French galleys of Marseille were ordered to join the Ottoman fleet. In 1552, when Henry II attacked Charles V, the Ottomans sent 100 galleys to the western Mediterranean, accompanied by three French galleys under Gabriel de Luetz d'Aramon, in raids along the coast of Calabria in southern Italy, capturing the city of Reggio. At the Battle of Ponza, fought off the island of Ponza, the fleet met 40 galleys under Andrea Doria and managed to vanquish the Genoese and capture seven galleys. This alliance also led to the combined invasion of Corsica in 1553. The Ottomans continued harassing the Habsburgs with various operations in the Mediterranean, such as the Ottoman invasion of the Balearic Islands in 1558, following a request by Henry II.
On the continental front, Henry II allied with German Protestant princes at the Treaty of Chambord in 1552. An early offensive into Lorraine was successful: Henry captured the three episcopal cities of Metz, Toul, and Verdun, and secured them by defeating the invading Habsburg army at the Battle of Renty in 1554. However, the French invasion of Tuscany in 1553, in support of Siena, which was under attack by an imperial-Tuscan army, was defeated at the Battle of Marciano by Gian Giacomo Medici in 1554. Siena fell in 1555 and eventually became part of the Grand Duchy of Tuscany founded by Cosimo I de' Medici.
The Treaty of Vaucelles was signed on 5 February 1556 between Philip II of Spain and Henry II of France. Based on the terms of the treaty, the territory of the Franche-Comté was relinquished to Philip. However, the treaty was broken shortly afterwards.
After Charles' abdication in 1556 split the Habsburg empire between Philip II of Spain and Ferdinand I, the focus of the war shifted to Flanders, where Philip, in conjunction with Emmanuel Philibert of Savoy, defeated the French at St. Quentin. England's entry into the war later that year led to the French capture of Calais, and French armies plundered Spanish possessions in the Low Countries. Nonetheless, Henry was forced to accept a peace agreement in which he renounced any further claims to Italy.
The wars ended for other reasons as well, including the Double Default of 1557, in which the Spanish Empire, followed quickly by the French, defaulted on its debts. In addition, Henry had to confront a growing Protestant movement at home, which he hoped to crush.
Oman (1937) argues that the inconclusive campaigns, which generally lacked a decisive engagement, were largely due to a lack of effective leadership and offensive spirit. He notes that mercenary troops were used too often and proved unreliable. Hale emphasizes the defensive strength of fortifications newly designed at angles to dissipate cannon fire. Cavalry, which had traditionally used shock tactics to overawe infantry, largely abandoned them and relied instead on pistol attacks by successive ranks of attackers. Hale notes the continued use of old-fashioned mass formations, which he attributes to lingering conservatism, but overall he emphasizes new levels of tactical proficiency.
In 1552 Charles V had borrowed over 4 million ducats, with the Metz campaign alone costing 2.5 million ducats. Shipments of treasure from the Indies totalled over 2 million ducats between 1552 and 1553. By 1554, the cash deficit for the year was calculated at over 4.3 million ducats, even after all tax receipts for the six ensuing years had been pledged and the proceeds spent in advance. Credit at this point began costing the crown 43 percent interest (largely financed by the Fugger and Welser banking families). By 1557 the crown was refusing payments from the Indies, since even these were required to pay for the war effort (including the offensive and the Spanish victory at the Battle of St. Quentin in August 1557).
The French war effort was financed mainly by increases in the taille, as well as by indirect taxes such as the gabelle and customs fees. The French monarchy also resorted to heavy borrowing during the war, from financiers charging interest rates of 10–16 percent. The taille was estimated to yield around 6 million livres in 1551.
During the 1550s, Spain had an estimated military manpower of around 150,000 soldiers, whereas France had an estimated manpower of 50,000.
The Peace of Cateau-Cambrésis was signed between Henry II of France and Philip II of Spain on 3 April 1559 at Le Cateau-Cambrésis, around twenty kilometers southeast of Cambrai. Under its terms, France restored Piedmont and Savoy to the Duke of Savoy, and Corsica to the Republic of Genoa, but retained Saluzzo, Calais, and the Three Bishoprics of Metz, Toul, and Verdun. Spain retained the Franche-Comté but, more importantly, the treaty confirmed its direct control of Milan, Naples, Sicily, Sardinia, and the State of Presidi, and its indirect control (through dominance of the rulers of Tuscany, Genoa, and other minor states) of northern Italy. The Pope was also a natural Spanish ally. The only truly independent entities on Italian soil were Savoy and the Republic of Venice. Spanish control of Italy lasted until the early eighteenth century. Ultimately, the treaty ended the sixty-year Habsburg–Valois wars.
Emmanuel Philibert, Duke of Savoy, married Margaret of France, Duchess of Berry, the sister of Henry II of France, and Philip II of Spain married Elisabeth, the daughter of Henry II. Henry died during a tournament when a sliver from the shattered lance of Gabriel Montgomery, captain of the Scottish Guard at the French court, pierced his eye and entered his brain.
The French had achieved mixed results: their situation had improved significantly compared to the late 1520s, they had made some territorial gains, and the treaty was considered an agreement between two equal powers. However, they had failed to change the balance of power in Italy or to break the Habsburg encirclement. Most importantly, their favorable position would soon be jeopardized by the French Wars of Religion. For the Habsburgs as a whole, the result was also mixed, as the war had weakened their position in the Holy Roman Empire and led to the separation of Charles' realms. For the Spanish Empire, however, the results were much better, as it was left the sole dominant power in Italy and had successfully withstood the French effort. England fared poorly during the war, and the loss of its last stronghold on the continent damaged its reputation.
The Peace of Cateau-Cambrésis (1559). Henry II of France and Philip II of Spain were in reality absent; the peace was signed by their ambassadors.
The fatal tournament between Henry II and Montgomery (Lord of "Lorges").
List of Sailor Moon characters
The Sailor Moon manga series features an extensive cast of characters created by Naoko Takeuchi. The series takes place in Tokyo, Japan, where the Sailor Soldiers (セーラー 戦士, Sērā Senshi), a group of ten magical girls, are formed to combat an assortment of antagonists attempting to take over the Earth, the Solar System, and the Milky Way galaxy. Each Soldier undergoes a transformation which grants her a uniform in her own theme colors and her own unique elemental power. The ten Sailor Soldiers are named after the planets of the Solar System, with the exception of Earth but inclusion of its moon. While many of the characters are humans with superhuman strength and magical abilities, the cast also includes anthropomorphic animals and extraterrestrial lifeforms.
The series follows the adventures of the titular protagonist, Sailor Moon, her lover Tuxedo Mask, and her guardians: Sailors Mercury, Mars, Jupiter, and Venus. They are later joined by Chibiusa (Sailor Moon and Tuxedo Mask's daughter from the future) and four more guardians: Sailors Uranus, Neptune, Pluto, and Saturn. The series' antagonists include the Dark Kingdom, the Black Moon Clan, the Death Busters, the Dead Moon Circus, and Shadow Galactica.
Takeuchi's initial concept was a story called Codename: Sailor V, in which Sailor V discovers her magical powers and protects the people of Earth. After the Codename: Sailor V manga was proposed for an anime adaptation, Takeuchi changed her concept to include ten superheroines who defend the galaxy. The manga's anime and live-action adaptations feature some original characters created by the production staff rather than by Takeuchi.
Naoko Takeuchi initially wrote Codename: Sailor V, a one-shot manga which focused on Sailor Venus. When Sailor V was proposed for an anime adaptation by Toei Animation, Takeuchi changed the concept to make Sailor Venus part of a sentai (team of five) and created the characters of Sailors Moon, Mercury, Mars, and Jupiter.
The name "Sailor Senshi", or "Sailor Soldier", comes from sailor fuku, the type of Japanese school uniform on which the main characters' fighting uniforms are based, and the Japanese word senshi, which can mean "soldier", "warrior", "guardian", or "fighter". Takeuchi created the term by fusing English and Japanese elements. The DIC Entertainment/Cloverway English adaptation of the anime changed it to "Sailor Scout" for most of its run. According to Takeuchi, only females can be Sailor Soldiers. In the anime's fifth season, the Sailor Starlights are depicted as men who transform into women when changing from their normal forms into Sailor Soldiers (rather than simply being women disguised as men, as in the manga), which strongly displeased Takeuchi, as she felt it undermined her rule that only girls could be Sailor Soldiers.
Takeuchi wanted to create a series about girls in outer space; her editor, Fumio Osano, suggested adding the "sailor suit" motif to the uniforms worn by the Sailor Soldiers. Originally, each Soldier was intended to have her own unique outfit, but it was later decided that they would wear uniforms based on a single theme, and Sailor Moon's costume concept was the closest to the one eventually used for all the girls. While the Soldiers' first uniforms had slight differences, Takeuchi settled on a more unified appearance in later stages of character design. Among the Sailor Soldiers, only the outfit worn by Sailor Venus during her time as Sailor V varies significantly from the others; however, Sailor Moon, whatever form she takes, always has a more elaborate costume than any of the others, and she gains individual power-ups more frequently than any other character. Sailor Soldiers originating from outside the Solar System have different and varying outfits, but a single feature, the sailor collar, connects them all.
Most of the antagonists in the series have names related to minerals and gemstones, including Queen Beryl and the Four Kings of Heaven, the Black Moon Clan, Kaolinite and the Witches 5, and most of the members of the Dead Moon Circus. The members of the Amazoness Quartet are named after the first four asteroids to be discovered. The Sailor Animamates have the prefix "Sailor" (despite not being true Sailor Soldiers in the manga), followed by the name of a metal and the name of an animal.
Usagi Tsukino (月野うさぎ, Tsukino Usagi, called Serena Tsukino in the original English dub) is the main protagonist of the series and leader of the Sailor Soldiers. Usagi is a careless fourteen-year-old girl with an enormous capacity for love, compassion, and understanding. Usagi transforms into the heroine called Sailor Moon, Soldier of Love and Justice. At the beginning of the series, she is portrayed as an immature crybaby who resents fighting evil and wants nothing more than to be a normal girl. As the series progresses, however, she embraces the chance to use her power to protect those she cares about.
Mamoru Chiba (地場衛, Chiba Mamoru, called Darien Shields in the original English dub) is a student somewhat older than Usagi. As a young child he was in a car accident that killed his parents and erased his memories. He possesses a special psychic rapport with Usagi and can sense when she is in danger, which inspires him to take on the guise of Tuxedo Mask and fight alongside the Sailor Soldiers when needed. After an initially confrontational relationship, he and Usagi remember their past lives together and fall in love again.
Ami Mizuno (水野亜美, Mizuno Ami, called Amy Anderson in the original English dub) is a quiet but intelligent fourteen-year-old bookworm in Usagi's class with a rumored IQ of 300. She can transform into Sailor Mercury, Soldier of Water and Wisdom. Ami's shy exterior masks a passion for knowledge and for taking care of the people around her. She hopes to become a doctor one day, like her mother, and tends to be the practical one in the group. She is secretly a fan of pop culture and romance novels and becomes embarrassed whenever this is pointed out. Ami also uses a handheld computer capable of scanning and detecting virtually anything she needs.
Rei Hino (火野レイ, Hino Rei, called Raye Hino in the original English dub) is an elegant fourteen-year-old miko (shrine maiden). Because of her work as a Shinto priestess, Rei has limited precognition and can dispel or nullify evil using special ofuda scrolls, even in her civilian form. She transforms into Sailor Mars, Soldier of Fire and Passion. She is very serious and focused, and is easily annoyed by Usagi's laziness, although she cares about her very much. In the anime adaptation, Rei is portrayed as boy-crazy and short-tempered, while in the manga and live-action series she is depicted as uninterested in romance and more self-controlled. She attends a private Catholic school separate from the other girls.
Makoto Kino (木野まこと, Kino Makoto, called Lita Kino in the original English dub) is a fourteen-year-old student in Usagi's class who possesses immense physical strength and was rumored to have been expelled from her previous school for fighting. Unusually tall and strong for a Japanese schoolgirl, she transforms into Sailor Jupiter, Soldier of Thunder and Courage. Both of Makoto's parents died in a plane crash years ago, so she lives alone and takes care of herself. In the original anime, she confesses to Seijuro that she has a younger sister who no longer wishes to speak to her. She cultivates her physical strength as well as domestic interests, including housekeeping, cooking, and gardening, and is also good at hand-to-hand combat. Her dream is to marry a young, handsome man and to own a flower-and-cake shop.
Minako Aino (愛野美奈子, Aino Minako, called Mina Aino in the original English dub) is a perky fourteen-year-old dreamer. Minako first appears as the main protagonist of Codename: Sailor V. She has a companion cat called Artemis, who works alongside Luna in guiding the Sailor Soldiers. Minako transforms into Sailor Venus, Soldier of Love and Beauty, and leads Sailor Moon's four inner soldiers, while acting as Sailor Moon's bodyguard and decoy because of their nearly identical looks. She dreams of becoming a famous singer and idol, and attends auditions whenever she can. In contrast, in the live-action series she is a successful J-pop singer (of whom Usagi, Ami, and Makoto are fans) who suffers from poor health due to anemia, choosing to separate herself from the other Guardians as a result.
Chibiusa (ちびうさ, called Rini in the original English dub) is the future daughter of Neo-Queen Serenity and King Endymion in the 30th century. She later trains with Sailor Moon to become a Sailor Soldier in her own right, learning to transform into Sailor Chibi Moon (or "Sailor Mini Moon" in the English series). At times she has an adversarial relationship with her mother in the 20th century, as she is in some ways more mature than Usagi, but as the series progresses they develop a deep bond. Chibiusa wants to grow up to become like her mother.
Setsuna Meioh (冥王せつな, Meiō Setsuna, called Trista Meioh in the original English dub) is a mysterious woman who first appears as Sailor Pluto, Soldier of Spacetime and Change. She has the duty of guarding the Space-Time Door from unauthorized travelers. Only later does she appear on Earth, living as a college student. She has a distant personality and can be very stern, but can also be quite friendly and helps the Sailor Soldiers when she can. After her long vigil guarding the Space-Time Door she carries a deep sense of loneliness, although she is close friends with Chibiusa. Sailor Pluto's talisman is her Garnet Rod, which aids her power to freeze time and her attacks.
Haruka Tenoh (天王はるか, Ten'ō Haruka, called Amara Tenoh in the original English dub) is a good-natured, masculine-acting girl who is a year older than most of the other Sailor Soldiers. She is able to transform into Sailor Uranus, Soldier of Sky and Flight. Before becoming a Sailor Soldier she dreamt of becoming a racer, and she has excellent driving skills. She tends to dress and, in the anime, speak like a man. When fighting the enemy she distrusts outside help and prefers to work solely with her girlfriend, Sailor Neptune, and later with Sailors Pluto and Saturn. Sailor Uranus's talisman, the Space Sword, aids her fighting and attacks.
Michiru Kaioh (海王みちる, Kaiō Michiru, called Michelle Kaioh in the original English dub) is an elegant and talented violinist and painter from a wealthy family, of an age with her partner and lover, Haruka Tenoh. She is able to transform into Sailor Neptune, Soldier of Ocean and Embrace. She worked alone for some time before finding her partner, Sailor Uranus. Neptune has ultimately given up her own dreams for the life of a Soldier; she is fully devoted to this duty and willing to make any sacrifice for it. Sailor Neptune's talisman is her Deep Aqua Mirror, which aids her intuition and reveals cloaked evil.
Hotaru Tomoe (土萠ほたる, Tomoe Hotaru) is a sweet and lonely young girl. A terrible laboratory accident in her youth significantly compromised her health; in the manga, this accident destroyed a large portion of her body, which was later rebuilt with electronic components by her father. After overcoming the darkness that has surrounded her family, Hotaru is able to become Sailor Saturn, Soldier of Silence, Destruction, and Rebirth. She is often pensive, and as a human has the inexplicable power to heal others. Sailor Saturn's weapon is her Silence Glaive, which gives her the power to generate barriers and to destroy a planet; however, using that power kills her, after which she is reborn through Sailor Moon.
The Dark Kingdom (ダーク・キングダム, Dāku Kingudamu, called the Negaverse in the original English dub) are the main antagonists of the first arc of the manga and anime, and of the entire live-action series. Serving under its ruler, Queen Beryl, members of the Dark Kingdom attempt to gather human energy and find the Silver Crystal in order to reawaken Queen Metaria, the evil entity responsible for the destruction of the Silver Millennium.
The Black Moon Clan (ブラック・ムーン一族, Burakku Mūn Ichizoku, called the Negamoon Family in the original English dub) are the main antagonists of the Black Moon arc of the manga and of most of Sailor Moon R. Members of the Black Moon Clan come from the planet Nemesis, a fictional tenth planet of the Solar System, and have black upside-down crescents on their foreheads. They are led by Prince Demand, who has been manipulated so that he and the Black Moon Clan gather power for the Wiseman.
The Death Busters (デス・バスターズ, Desu Basutāzu, called the Heart Snatchers in the original English dub) are the main antagonists of the Infinity arc of the manga and of Sailor Moon S. Led by Professor Tomoe, the Death Busters' main goal is the resurrection of Mistress 9, who in turn would bring the alien creature Pharaoh 90 to destroy Earth in an event known as the "Silence".
The Dead Moon Circus (デッド・ムーン・サーカス, Deddo Mūn Sākasu, called the Dark Moon Circus in the original English dub) are the main antagonists of the Dream arc of the manga and of Sailor Moon SuperS. Led by Zirconia, members of the Dead Moon Circus are searching for the Golden Crystal, which will allow their queen, Nehelenia, to break free of her entrapment within a mirror and take over the Earth.
Shadow Galactica (シャドウ・ギャラクティカ, Shadō Gyarakutika) are the main antagonists of the Stars arc of the manga and most of Sailor Stars. Shadow Galactica is an organization of corrupted Sailor Soldiers led by Sailor Galaxia, who devote themselves to stealing Star Seeds, the essence of sentient life, from inhabitants of the Milky Way. Their ultimate goal is to reorganize the universe as desired by Chaos, the ultimate antagonist of the series.
The Hell Tree aliens are a minor group of antagonists composed of Ail, Ann, and the eponymous Hell Tree, who appear only in the first thirteen episodes of Sailor Moon R. Ail and Ann wandered through space alone for many years before reaching Earth, where they finally find energy to collect for the Hell Tree, so that they can revive it and it, in turn, can give them the energy they need to survive. Unlike other antagonists of the series, their mission is primarily one of survival, not conquest or destruction. In some English adaptations of the anime, their name is changed to the "Doom Tree aliens".
Ail (エイル, Eiru) and Ann (アン, An) are two humanoid aliens who pose, respectively, as Seijūrō Ginga (銀河星十郎, Ginga Seijūrō, called Alan Granger in the original English dub) and Natsumi Ginga (銀河夏美, Ginga Natsumi, called Ann Granger in the original English dub), two transfer students who live in the Jūban Odyssey apartments. While trying to blend in with the humans at Usagi's school, Ail assumes the role of brother to Ann. He develops a crush on Usagi and constantly tries to win her over, much to the dismay of Ann; he constantly denies these feelings to Ann, knowing her to have fits of jealous rage. Ann, in turn, develops a crush on Mamoru and constantly tries to win him over, much to the dismay of Ail and Usagi. Ail and Ann are the only two of their kind, and could be considered both siblings and romantic partners, since both were born from the Hell Tree.
Hikaru Midorikawa voiced Ail in the original series, and Yumi Tōma voiced Ann. In the DIC English version, Alan is voiced by Vince Corazza and Ann by Sabrina Grdevich. In the Viz Media English version, Ail is voiced by Brian Beacock and Ann by Johanna Luis.
The Hell Tree (魔界樹, Makaiju, called the Doom Tree in the original English dub) is an alien tree that nourishes Ail and Ann, but it becomes weak and requires energy to stay alive. For some time they supply it with human energy, but eventually this stops working. In the final episode of the story arc, the aliens try to give Usagi to it as an offering, because Usagi's energy caused a sapling to grow on the Tree. The Tree becomes angered and starts to injure those around it, killing Ann in the process. It then stops to tell its story to Sailor Moon: the Tree once lived alone on a faraway planet, on an island in a vast ocean, for countless years until it began to create life (the English dub changed the story to suggest it was once called "The Tree of Life"). It gave energy to its children, but eventually they became greedy and began to fight each other until the planet was destroyed and only two small children were left, Ail and Ann. By then it was weak and needed the energy of love to survive. Sailor Moon uses her power to purify the Tree, and it disappears. Ann is resurrected, and when she reunites with Ail a small sapling appears before them; the Tree has been reborn, and they are given a chance to start over, leaving Earth to live a better life with the Tree.
The Hell Tree was voiced in Japanese by Taeko Nakanishi. In the DIC English adaptation, the Doom Tree was voiced by Elizabeth Hanna. In the Viz Media English adaptation, she is voiced by Erin Fitzgerald.
The Cardians (カーディアン, Kādian) are the monsters of the day used by Ail and Ann to obtain energy to revive the Hell Tree. The Cardians are kept in cards until they are summoned: Ail holds up several cards, Ann picks one, the chosen card rises into the air, and Ail plays a tune on his flute which brings the Cardian to life. When a Cardian is destroyed, it changes back into its card form, and the picture of the Cardian on the card turns black after a few seconds.
The series includes three different cat characters who act as advisors to their respective owners. Each has the power of speech and bears a crescent moon symbol on his or her forehead. The two older cats, Luna and Artemis, lived in the Moon Kingdom millennia before the main plot and acted as advisors to Queen Serenity; the third, Diana, is much younger and was born on Earth. The cats serve as mentors and confidantes, and as a source of information, new tools, and special items. They are shown to have additional physical forms, a deeper backstory, and an unrequited love or two. Although Luna takes the largest role of the three, Artemis was the first of the cats to appear; he figures prominently in Codename: Sailor V, the manga series which preceded Sailor Moon.
In Act 46 of the manga, the three are attacked by Sailor Tin Nyanko, a false Soldier from their home planet Mau (named after the Chinese word 貓, meaning "cat"). Artemis calls it a peace-loving world, but Tin Nyanko informs him that its people were wiped out by Sailor Galaxia after he and Luna left it. Tin Nyanko blasts all three of them on their crescent moon symbols, and they turn into ordinary cats, unable to speak. Later, as they care for the badly injured cats, Princess Kakyuu tells Usagi that the three of them have powerful Star Seeds, as brilliant as Sailor Crystals. In Act 48, they are brought to the River Lethe and killed by Sailor Lethe. They are reincarnated at the end of the series along with everyone else.
In the live-action series, Luna and Artemis are portrayed as stuffed toys rather than real cats. Usually they are represented by puppets, though CGI effects are used for complicated scenes.
Writer Mary Grigsby considers the cat characters to blend pre-modern ideas about feminine mystery with modern ones such as the lucky cat.
Luna (ルナ, Runa) is a black cat who is a devoted servant to Princess Serenity and advisor to her mother, Queen Serenity. When the kingdom falls, she and Artemis are put into a long sleep and sent to Earth to look after the Sailor Soldiers, who are reborn there. Part of Luna's memory is suppressed so that she must find the Sailor Soldiers. She first encounters Usagi Tsukino and teaches her to become Sailor Moon, unaware that she is actually the reincarnated Princess Serenity. Luna also provides the Soldiers with many of their special items. Over the course of the series, Luna develops a close bond with Usagi, though it begins on uneasy terms, as Luna often upsets Usagi by giving her unsolicited advice. She also becomes good friends with Ami Mizuno. She and Artemis have an implied romantic relationship, which is confirmed when they meet Diana, their daughter from the future. In Sailor Moon Sailor Stars, Luna also develops a crush on Kou Yaten, one of the Three Lights.
In "The Lover of Princess Kaguya", a side story of the manga, she falls in love with a human named Kakeru. This story was adapted as Sailor Moon S: The Movie and features Luna's first transformation into a human. She catches a cold and tries to find her way home despite Artemis' plea to go with her, and ends up lying in the street until Kakeru saves her from becoming roadkill.
In Act 27 of the live-action series, Luna gains the ability to turn into a young human girl, going by the name Luna Tsukino, and can become a Sailor Soldier known as "Sailor Luna". She is shown living as a human with Usagi's family, with whom she gets along quite well, but still takes on her cat form when necessary. Her personality as a human girl is identical to her normal self and is easily overwhelmed by her feline nature, but she also takes on some of the personality traits of Usagi and her mother, such as acting in the same melodramatic manner when waking up in the morning. Takeuchi designed the character of Sailor Luna. Luna's human form is portrayed by Rina Koike, who thought that she was going to play Chibiusa until she went in for a costume fitting.
Luna is voiced by Keiko Han in the anime television series and the live-action series, and by Ryō Hirohashi in all media following Sailor Moon Crystal. In the DIC/Cloverway English adaptation, she is voiced by Jill Frappier, who portrayed the character with an English accent, described as "fairly old, not to mention cranky and British". Her role in the series has been compared to that of Rupert Giles in Buffy the Vampire Slayer. In the Viz Media English adaptation, her voice is supplied by Michelle Ruff.
Artemis (アルテミス, Arutemisu) is the white cat companion of Minako Aino. Artemis trains her to become Sailor V and remains by her side when she takes on her proper role as Sailor Venus. He first guides Usagi Tsukino through the Sailor V video game at the Crown Game Center arcade without revealing his true identity; when a technical problem reveals him, Luna is greatly annoyed to learn that he has been the one guiding her all along. Later, he fills in the details of her true mission. In the Sailor V manga and the live-action series, Artemis gives special items to the Soldiers, although unlike Luna he does not seem to produce them himself. He does not seem to mind being named after a female goddess, even when teased about it by Minako. Artemis is more easygoing than Luna and has a "big brother" relationship with Minako, although an attraction to her is sometimes implied. He also cares very deeply about Luna, often comforting her when she is distressed and stating his admiration for her. In addition, he is a good father to Diana, as evidenced by her affection for him.
In the original Japanese series, Artemis is voiced by Yasuhiro Takato in Sailor Moon and by Yōhei Ōbayashi in Crystal. In the live-action series, he is voiced by Kappei Yamaguchi. He appears in the first Sailor Moon musical, played by a cat-suited Keiji Himeno. In the DIC/Cloverway English adaptation, he is voiced by Ron Rubin. In the Viz Media English adaptation, his voice is supplied by Johnny Yong Bosch.
In the original Japanese series, Artemis is voiced by Yasuhiro Takato in Sailor Moon and by Yōhei Ōbayashi in Crystal. In the live - action series, he is voiced by Kappei Yamaguchi. He appears in the first Sailor Moon musical, played by a cat - suited Keiji Himeno. In the DIC / Cloverway English adaptation, he is voiced by Ron Rubin. In the Viz Media English adaptation, his voice is supplied by Johnny Yong Bosch.
Diana (ダイアナ, Daiana) is the future daughter of Luna and Artemis. She first appears when the Sailor Soldiers travel to the 30th century in the Black Moon arc. After defeating Death Phantom, the Sailor Soldiers return to the 20th century and Diana joins them. In the anime, she first appears in Sailor Moon SuperS, calling Artemis her father, to Luna 's initial dismay. Only later is it revealed that Diana has come from the future and that her mother is Luna. Just as Luna and Artemis guide Usagi and Minako, Diana acts as a guardian to Chibiusa. She is very curious, eager to help, and deeply polite, always addressing Usagi and Mamoru with the Japanese honorific "- sama '' and calling Chibiusa by her formal title, Small Lady. She is able to help the Sailor Soldiers on occasion, despite her youth, often because of the knowledge she has gained in the future.
Diana is voiced by Kumiko Nishihara in the first series, and by Shoko Nakagawa in Crystal. In the Cloverway English adaptation, she is voiced by Loretta Jafelice in the series, and by Naomi Emmerson in Sailor Moon Super S: The Movie. In the Viz Media English adaptation, she is voiced by Debi Derryberry.
The Sailor Starlights (セーラー スター ライツ, Sērā Sutāraitsu) are a group of Sailor Soldiers composed of Sailor Star Fighter, Sailor Star Maker, and Sailor Star Healer; in civilian form they go by the pseudonyms Kou Seiya, Kou Taiki, and Kou Yaten, respectively. They come from the fictional planet Kinmoku, whose princess, Princess Kakyuu, left the planet to escape Sailor Galaxia 's assault and to heal her wounds. The Starlights abandon Kinmoku and track Kakyuu to Earth and then Japan, where they disguise themselves as a male pop star group called The Three Lights (スリー ライツ, Surī Raitsu) and embed their music with a telepathic broadcast in order to attract Kakyuu 's attention. The Three Lights all attend Jūban High School along with Usagi and her friends. Eventually, on their way to the Galaxy Cauldron, they are killed by Galaxia 's servants Sailor Chi and Sailor Phi.
In the anime, the Starlights were given a major role. The trio are biologically male in their civilian forms, becoming women when transforming into Sailor Soldiers, as opposed to their manga counterparts, who are females disguising themselves as males in their civilian forms. As Starlights, they distance themselves from the other Sailor Soldiers, deeming that Earth is not their responsibility. The Starlights survive several direct battles with Galaxia herself, and help Sailor Moon defeat Chaos to save Galaxia. Takeuchi expressed surprise at Toei Animation 's decision to make the Starlights lead characters in the anime adaptation, but was even more shocked by their treatment of the Starlights ' sex. The change was directly overseen by director Takuya Igarashi. In the Italian dub, instead of changing sex, the trio became six people: the Three Lights were always men, and simply summoned their twin sisters instead of transforming, as the original depiction was considered too controversial in Italy.
The Starlights are featured in several of the Sailor Moon musicals (Sailor Stars, Eien Densetsu, and their revised editions, plus Ryuusei Densetsu, and Kakyuu - Ouhi Kourin). While played by women, it is meant to be ambiguous whether they take on male forms (as in the anime) or are cross-dressers (as in the manga), though their personalities are clearly drawn from the anime. Their story also combines elements from both the manga and the anime; for instance, they travel to the Galaxy Cauldron as they do in the manga, but survive the battles against Galaxia as they do in the anime. The pairings with the Sailor Soldiers from the anime are also featured in some musicals.
Their exact relationship to each other is unknown; according to the manga they are not siblings. Their surname "Kou '' (光) translates to "light '', among other things, making the name "Three Lights '' a pun. In the original English manga, "Kou '' was translated to "Lights '' and was used as their shared family name.
When Takeuchi originally designed the Sailor Starlights, she did so without their ponytails, but Bandai explained to her that short - haired dolls were difficult to make. Describing herself as having a "soft spot for dolls '', Takeuchi eventually added the ponytails.
Kou Seiya (星野 光, Seiya Kō) is the leader of the Starlights as Sailor Star Fighter (セーラー スター ファイター, Sērā Sutā Faitā) and the lead vocalist for the Three Lights. In general, Seiya acts arrogantly and tends to be, at least on the surface, confident in his / her own abilities.
Seiya becomes the star player of the local high - school American football team and the school 's star athlete, upsetting Haruka Tenoh, who was the school 's previous star athlete on the track and field team. Eventually, Seiya raises the suspicions of the Sailor Soldiers as to his / her identity. In the anime, Taiki and Yaten consider him prone to bouts of childishness (such as when he shows off his basketball skills in front of the school), but generally follow his lead.
Eventually, Seiya develops strong feelings for Usagi, and his attempts to forge a bond with her provide the primary romantic tension of the season. Seiya calls Usagi odango, like Mamoru does. The two even go on a date at an amusement park, which ends prematurely when Sailor Iron Mouse attacks. Seiya makes his interest in her clear when they spend time together practicing softball, telling her, "I like your light. '' However, Seiya 's feelings are not fully reciprocated and he / she acknowledges the one - sided romance.
The relationship between Sailor Star Fighter and Princess Kakyuu is slightly ambiguous. In the anime, when he daydreams of his home planet, he thinks lovingly of an image of his princess, which is suddenly superimposed by an image of Usagi (much as Usagi had seen Seiya 's image overlaid by Mamoru in previous episodes). In the image poem released for his CD single, however, he suggests that his feelings for her are because he is "carrying the heart of a boy '' and because he was attracted to her light.
Seiya 's responsibilities in the band are lead vocals, guitar, and lyrics. In the anime, he is seen angrily playing the drums in the group 's hideout because the Starlights believe their princess has not yet heard them; after this, Seiya is not seen on the drums again. According to Naoko Takeuchi, the character was conceived as a combination of Haruka and Mamoru, and was modeled after Jenny Shimizu.
In the original Japanese version of the anime series, he / she was voiced by Shiho Niiyama in one of her final roles before her death. In the musicals, Seiya has been portrayed by Sayuri Katayama, Chinatsu Akiyama and Meiku Harukawa.
Kou Taiki (大気 光, Taiki Kō), better known as Sailor Star Maker (セーラー スター メイカー, Sērā Sutā Meikā), is the most intellectual of the trio. His / Her abilities rival those of Ami Mizuno, though he / she considers her romantic notions foolish. In the anime, however, Ami 's appeal for him to see the good in dreaming does begin to have an effect. In combat with a phage, Star Maker is the first of the Starlights to willingly allow Sailor Moon to heal the monster rather than trying to kill it herself, because it had been a teacher whom Ami respected. Later in the series, as he is beginning to lose hope in finding Princess Kakyuu, he visits a sick girl named Misa in the hospital. She shows him a drawing of the Princess that she sees when she listens to the Three Lights ' song. With renewed hope, Taiki returns to the Three Lights. In the anime, he sometimes wears glasses.
Like Yaten, Taiki believes that Seiya should stay away from Usagi after learning she is Sailor Moon, despite the wish of Princess Kakyuu and the Sailor Soldiers for them all to work together. However, his views on Usagi change for the better near the end of Sailor Stars. He is the most cool - headed of the trio.
Taiki 's responsibilities in the band are background vocals, keyboards, and composition. He / she also enjoys poetry and belongs to the literature club at school. Taiki is meant to be a more - distant Setsuna Meioh.
In the original Japanese version of the anime series, he / she was voiced by Narumi Tsunoda. In the musicals, Taiki has been portrayed by Hikari Ono, Akiko Nakayama, and Riona Tatemichi.
Kou Yaten (夜 天 光, Yaten Kō), better known as Sailor Star Healer (セーラー スター ヒーラー, Sērā Sutā Hīrā), is a lonely person who does not like to socialize or play sports. His / Her remarks are often sharp - edged and blunt, which further separates him / her from the world. At one point, the other Starlights even chastise Yaten for behaving in a way that might reduce the number of fans. Yaten does not interact much with the people around him / her, wanting to focus on the mission. Yaten is egotistical, nurses grudges, and hates being injured. However, he / she and Luna get along well.
In the anime, he has the most spiritual awareness of the Starlights, and is able to tell when Star Seeds are taken by Sailor Galaxia. He views humans as untrustworthy and wants to find Princess Kakyuu so they can leave Earth as quickly as possible. This becomes even clearer when they discover that Usagi is Sailor Moon. Yaten believes that Seiya should stay away from Usagi, despite the wish of Princess Kakyuu and the Sailor Soldiers for them all to work together. His view is shared by Taiki as well as Sailors Uranus, Neptune, and Pluto. However, also like Taiki, his views on Usagi change for the better near the end of Sailor Stars. In the anime, Yaten is shown to be physically stronger in his civilian form than Makoto Kino, the strongest of the Sailor Soldiers, in her civilian form.
Yaten 's responsibilities in the band are background vocals, bass guitar, and song arrangement. He also enjoys photography but does not belong to any school club, preferring to just go home.
In the original Japanese version of the anime series, he / she was voiced by Chika Sakamoto. In the musicals, Yaten has been portrayed by Momoko Okuyama, Mikako Tabe, and Saki Matsuda.
Ikuko Tsukino (月野 育子, Tsukino Ikuko) is the mother of Usagi (Sailor Moon). She is often seen cooking and lecturing Usagi for her grades in school; still, they are shown to be pretty close, since she gives Usagi advice on relationships of all kinds from time to time, and eagerly accepts her relationship with Mamoru. She cares for Chibiusa when she is present, whom she believes to be her niece, but who in truth is her future granddaughter. She also cares for Chibichibi, whom she believes to be her second daughter. Ikuko 's name and design are modeled after Takeuchi 's mother.
In the live - action series, Ikuko is portrayed as an extremely outgoing, quirky, and determined person. She changes her hairstyle almost every day, is constantly trying out new (and questionable) omelette recipes, and loves nothing more than being in the spotlight. She is even high - school friends with Minako 's manager, and it is said the two of them were big participants in their school 's theater program.
In the original Japanese series, Ikuko is voiced by Sanae Takagi in the first anime and by Yuko Mizutani in Crystal until her death in 2016. In the DIC and Cloverway English dubs, she is voiced by Barbara Radecki. In the Viz Media English dub, her voice is supplied by Tara Platt. Kaori Moriwaka portrays Ikuko in the live - action series.
Kenji Tsukino (月野 謙之, Tsukino Kenji) is Usagi 's father, a stereotypical well - meaning Japanese salaryman, who works as a magazine reporter and later as an editor - in - chief. Kenji is quite affectionate with his wife. Early on, he becomes jealous when he sees Usagi with Mamoru Chiba, thinking that Umino is a better candidate. Like his wife, Kenji is entirely unaware of Usagi 's real identity, though he is the only member of the family who notices the similarities between Sailor Moon and Usagi. He senses a maturity in his daughter when she is finally aware of her status as Princess Serenity, and notes that at times her beauty seems serene. Kenji appears less frequently after the anime adaptation 's second season.
In the live - action series, he never appears in the main body of the series, which is explained by his always being away on business trips. He appears briefly in the direct - to - DVD Special Act, crying at Usagi 's wedding.
In the anime series, Kenji is voiced by Yuji Machi in the first series and by Mitsuaki Madono in Crystal. In the DIC / Cloverway English adaptation, he is voiced by David Huband. In the Viz Media English adaptation, he is voiced by Keith Silverstein. In the Special Act of the live - action series, he is portrayed by series director Ryuta Tasaki.
Shingo Tsukino (月野 進悟, Tsukino Shingo, called Sammy Tsukino in the original English dub) is the younger brother of Usagi, making her the only Sailor Soldier with a known sibling. His influence in her life is alternately helpful and mocking; he considers her well - meaning, but also a crybaby and accident - prone. Though unaware of his sister 's true identity, Shingo is impressed by the media - hyped urban legends of Sailor Moon and Sailor V. He is a particularly enthusiastic fan of Sailor Moon because she rescued him from Dark Kingdom forces early in her career. He enjoys video games and is a diligent student. Shingo 's favorite book is Shonen J * mp (a reference to the manga anthology Weekly Shōnen Jump), and he likes to play games on the Famicom. In the anime, Shingo appears in several episodes of the first season, but is less frequently seen afterwards.
In the live - action series, Shingo dislikes much of what his sister and mother do, and does not care about much of life in general. In the video game Another Story, Shingo is temporarily granted a large role, as he is kidnapped for ransom by the villains in an attempt to force Usagi to hand over the Silver Crystal.
In the original Japanese series, Shingo was voiced by Chiyoko Kawashima until her retirement in 2001, with Seria Ryū taking over the role afterward in Crystal. In the DIC / Cloverway English adaptation, he is voiced by Julie Lemieux. In the Viz Media English adaptation, his voice is supplied by Nicolas Roye. In the live - action series, he is portrayed by Naoki Takeshi.
Naru Osaka (大阪 なる, Ōsaka Naru, called Molly Baker in the original English dub) is Usagi 's best friend and schoolmate at the start of the series. Naru and her mother are the very first victims of a monster attack, and Naru hero - worships Sailor Moon for saving them. Throughout the series she continues to be a frequent target of villains and monsters. In a "memorable subplot '' of the anime adaptation, Naru falls in love with Nephrite, who eventually returns her feelings and attempts to atone for his misdeeds. His death while protecting Naru devastates her throughout the first season. Kotono Mitsuishi was particularly touched by this sequence. In the anime, Naru ends up dating Gurio Umino.
Naru plays a much more important role in the live - action series, learning most of the truth about the Sailor Soldiers. She is also a more confident and outgoing person. For a short while, she and Ami share a conflicted relationship as both seem to be jealous of the other 's closeness with Usagi. However, they resolve their differences and become good friends.
Naru 's younger sister, Naruru, features in a short side - story in the Stars manga; in the anime, Naru is stated to be an only child. Naruru first appears with Haruka, Michiru, and Usagi at the high school and is shown getting along with them.
Naru is voiced by Shino Kakinuma in the original series and by Satomi Satō in Crystal. In the DIC / Cloverway English versions, she is voiced by Mary Long in a heavy Brooklyn accent. In the Viz Media English version, she is voiced by Danielle Judovits. Chieko Kawabe portrays her in the live - action series.
Gurio Umino (海野 ぐりお, Umino Gurio, called Melvin Butlers in the original English dub) is a student in Usagi 's class at school. He is usually called simply Umino, and initially has a severe infatuation with Usagi. His defining characteristic is his glasses, which have odd swirls in them, denoting their thickness. Umino is commonly portrayed as a "nerdy '', "weird '', "know - it - all '' otaku, regularly keeping Usagi informed on current events, new students, gossip, and any other information she might appreciate. Despite his ordinarily nerdy appearance, Umino is implied (and later confirmed by Takeuchi) to be incredibly handsome when he takes his glasses off. In the first anime, he eventually ends up dating Naru, and like her, his importance gradually decreases after the first anime series.
The kanji in Umino 's surname represent a pun meaning either "ocean field '' or "of ocean ''; as such, it is constructed in the same way as Usagi 's and those of other Sailor Soldiers. His first name, Gurio, is given in hiragana and so its meaning is unclear.
In the Japanese series, his voice actor is Keiichi Nanba in Sailor Moon and Daiki Yamashita in Crystal. In the DIC / Cloverway English adaptation, he is voiced by Roland Parliament. In the Viz Media English adaptation, his voice is supplied by Ben Diskin.
Haruna Sakurada (桜田 春菜, Sakurada Haruna, called Patricia Haruna in the original English dub) is a junior high school teacher who often lectures Usagi for her laziness. Haruna intends to find a husband, which makes her an easy target for the Dark Kingdom during the first arc, and she often engages in seemingly childish things in this regard. She appears less frequently as the series progresses, and is never seen after Usagi and her friends start high school. In the live - action series, Haruna assigns pop quizzes and clean - up duty when needed. She has an extremely eccentric personality, and is very friendly and motherly towards her students, even Usagi.
The kanji in her name mean "cherry blossom '' (sakura), "rice field '' (da), "spring '' (haru), and "vegetables '' (na). The "spring '' part of her name becomes a pun in the context of other works by Takeuchi: Haruna appears very briefly in one earlier series, The Cherry Project, which features her sister Fuyuna in one of its side stories. Two other characters with similar names appear in Takeuchi works: Natsuna in Codename: Sailor V and Akina in PQ Angels. The Japanese words fuyu, natsu, and aki mean "winter '', "summer '', and "autumn '', respectively.
In the Japanese series, Haruna was originally voiced by Chiyoko Kawashima in Sailor Moon until her retirement in 2001. Akemi Kanda voices her from Crystal onwards. In the DIC English adaptation, she is voiced by Nadine Rabinovitch. In the Viz Media English adaptation, her voice is supplied by Julie Ann Taylor. She is played by Tomoko Otakara in the live - action series. In the musicals, Haruna is portrayed at various points by Kasumi Hyuuga and Kiho Seishi.
Motoki Furuhata (古幡 元 基, Furuhata Motoki, called Andrew Hansford in the original English dub) works at the Crown Game Center, a video arcade Usagi frequently visits. In A Scout is Born, an adaptation of the anime 's first three episodes by Stuart J. Levy, he is called Andrew Foreman. Motoki also holds a job at the Crown Fruit Parlor and is a KO University student along with Mamoru Chiba. After he recognizes the Sailor Soldiers and learns their true identities, Motoki vows not to tell anyone. In the anime adaptation, Usagi calls him "Big Brother '' Motoki (元 基 お 兄さん, Motoki - oniisan) and has a crush on him in the beginning of the series. Motoki and Mamoru also attend the Azabu Institute of Technology. He is rather naive, and says that he views the girls as younger sisters, oblivious to the fact that they have crushes on him. He has a little sister, Unazuki Furuhata, who is friends with Usagi and the others. His girlfriend is Reika Nishimura, a science student. Later in the series, it is revealed that he and Reika knew Setsuna while she was studying at their university. In the continuity of Sailor Moon Crystal, Motoki 's background is the same, but he does not know Mamoru, who is still a high school student.
In the live - action series, the Crown Center is a karaoke parlor. There is an initial recurring flirtatious relationship between Motoki and Makoto until it becomes a bit more serious, and in the Special Act, which takes place four years after the series finale, Motoki proposes to Makoto, who accepts.
In the Japanese series, Motoki is voiced by Hiroyuki Satō in Sailor Moon and by Hiroshi Okamoto in Crystal. In the DIC English adaptation, he is voiced by Colin O'Meara, in the Cloverway one by Joel Feeney. In the Viz Media English adaptation, his voice is supplied by Lucien Dodge. Motoki is portrayed by Masaya Kikawada in the live - action series.
Reika Nishimura (西村 レイカ, Nishimura Reika, called Rita Blake in the original English dub) is Motoki Furuhata 's girlfriend and fellow student at KO University. She later befriends Setsuna Meioh there. In the anime, she is the reincarnation of the Great Youma Rikokeidā. After leaving Japan twice to study abroad, she eventually leaves the country for 10 years, but Motoki is still willing to wait for her. She is voiced by Rica Fukami in the original series and by Mai Nakahara in Crystal. In the DIC English adaptation, Reika was voiced by Wendy Lyon and Kathleen Laskey, while Daniela Olivieri voiced her in the Cloverway dub. In the Viz Media English adaptation, she is voiced by Erica Mendez. In the English dub, she is called "Reika '' in Super S.
Rei 's grandfather (レイ の おじいさん, Rei no ojiisan) is the grandfather of Rei Hino and a Shinto priest who lives at the Hikawa Shrine. In the anime, he has a different physical appearance and plays a more - prominent role as one of the holders of the Rainbow Crystals that make up the Silver Crystal. He often flirts with anyone regardless of gender. In the original Japanese series, his voice actor is Tomomichi Nishimura in the first anime. In the DIC / Cloverway English adaptation, he is voiced by David Fraser, except in Sailor Moon S episode 99, where John Stocker voiced him as a stand - in. In the Viz Media English adaptation, his voice is supplied by Michael Sorich.
Yūichirō Kumada (熊田 雄一郎, Kumada Yūichirō, called Chad Kumada in the original English dub) is an anime - only character appearing as a ragged - looking young man who helps out at the Hikawa Shrine. His family is very rich and has a lodge in the mountains, where he takes Rei Hino and her friends to go skiing. After falling in love with Rei, Yūichirō decides to stay at the Hikawa Shrine in order to be near her. Even though she does not reciprocate his love, he remains faithful and tries to protect her. She warms up to his personality considerably over time. In the first Japanese anime series, Yūichirō is voiced by Bin Shimada. In the DIC / Cloverway English adaptation, he is voiced by Steven Bednarski, Damon D'Oliveira (Sailor Moon S), and Jason Barr (Sailor Moon Super S). In the Viz Media English adaptation, he is voiced by Wally Wingert.
Unazuki Furuhata (古幡 宇奈月, Furuhata Unazuki, called Elizabeth "Lizzie '' Furuhata in the original English dub) is the younger sister of Motoki Furuhata who works as a waitress at the Crown Fruit Parlor, where the Sailor Soldiers spend much of their free time in the latter parts of the anime. Unazuki attends T * A Private Girls School with Rei Hino. She first appears sporadically, with her first appearance in Sailor Moon R as a mistaken love rival for Mamoru Chiba. She dreams of her first kiss in Sailor Moon S, which results in being targeted by the Death Busters. Unazuki appears more frequently in Super S as a major supporting character and is usually among Usagi 's group. She is voiced by Miyako Endou in the first series, with Eriko Hara as a stand - in. In the DIC English adaptation, she is voiced by Sabrina Grdevich and in the Cloverway dub by Catherine Disher in Sailor Moon S and Susan Aceron in Super S. In the Viz Media English adaptation, she is voiced by Veronica Taylor.
Ittou Asanuma (浅沼 一 等, Asanuma Ittō) is introduced in the Black Moon arc of the manga as Makoto 's friend. He is interested in science fiction, UFOs, and the paranormal activity that occurs in the area. He greatly respects Mamoru, who is an upperclassman at his school. Asanuma initially thinks that the Sailor Soldiers are aliens. However, after he sees Luna talk, Makoto confesses the Soldiers ' identities to him. Asanuma is later attacked by the Ayakashi sister Calaveras before being rescued by Sailor Moon. At the beginning of the Infinity arc he appears with Mamoru and Chibiusa in an amusement park, and in the Stars arc he gives Mamoru 's phone number to Usagi when she is unable to locate him. Asanuma makes a cameo in the anime, looking for Mamoru when the latter has been controlled by Queen Nehelenia. He is voiced by Kazuya Nakai in the original series and by Daisuke Sakaguchi in Crystal. In the Viz Media English dub, he is voiced by Greg Felden.
Momoko Momohara (桃原 桃子, Momohara Momoko) appears as an elementary - school student who befriends Chibiusa. In the anime, she is badly injured in a fight with Chiral and Achiral, two Black Moon members, causing Chibiusa to go into a fit and unleash her latent powers at the monsters. Later, Momoko becomes the first target of the Amazoness Quartet, but is saved by Sailor Chibi Moon and Sailor Moon. She is voiced by Taeko Kawata. In the DIC / Cloverway English adaptation, her name is changed to Melissa and later Melanie, and her voice is supplied by Mary Long and Tanya Donato at various points. In the Viz Media English adaptation, she is voiced by Debi Derryberry.
Kyūsuke Sarashina (更 科 九 助, Sarashina Kyūsuke, called Kelly Sarashina in the English dub) attends elementary school with Chibiusa and Momoko. He is the younger brother of Kotono, who goes to school with Rei. He is known to be very athletic and sarcastic. Kyūsuke makes recurring appearances in Sailor Moon Super S, and is targeted by Amazoness JunJun in episode 155. He appears in a later episode, when Chibiusa befriends a boy named Hiroki, who is trying to build a flying machine. While initially resentful of Hiroki and of how impressed Chibiusa is with Hiroki 's dream, Kyūsuke encourages Hiroki to continue building the flying machine after multiple failed attempts. He is voiced by Kazumi Okushima in his initial appearance, and by Daisuke Sakaguchi in all subsequent appearances.
Queen Serenity (クィーン ・ セレニティ, Kuīn Sereniti) is the mother of Princess Serenity. As the Queen of the Moon, she reigns during the Silver Millennium. She states that the ancients have known her as the goddess Selene. When the Dark Kingdom attacks the Moon Kingdom, she sacrifices herself by using the Silver Crystal to seal Queen Metaria and to have her daughter, Endymion, and the Sailor Soldiers reborn on Earth. Queen Serenity first appears as a hologram, having saved her spirit within a computer in order to preserve her will. She tells the Sailor Soldiers of their past lives, which they begin to remember as she describes them, and tells them that they must find Metaria, who has escaped the seal placed on her and gone into hiding on Earth. She only appears in flashbacks after this. In the anime adaptation 's second season, Queen Serenity allows Usagi to gain her latest transformation. She also appears in the "Special Act '' of the live - action series.
She is voiced by Mika Doi in the first anime series, with Mami Koyama taking over the role for Crystal. In the DIC / Cloverway English adaptation, she is voiced by Barbara Radecki in the first episode and later by Wendy Lyon. In the Viz Media English adaptation, she is voiced by Wendee Lee. In the live - action series, Miyuu Sawai portrays Queen Serenity, with her voice dubbed over by Yoko Soumi.
Phobos (フォボス, Fobosu) and Deimos (ディモス, Dimosu) are Rei 's pet crows that live at the shrine, which she named after the two moons of Mars. They have the ability to sense evil, and sometimes attack enemies. It is revealed that when Rei was a child, they "told '' her their names. Eventually, they reveal themselves as the Power Guardians -- small humanoid sprites charged with guarding Sailor Mars. They save Sailor Mars from being killed by Tiger 's Eye and give her Sailor Crystal to her. They are later revealed to be from the planet Coronis when they encounter Sailor Lead Crow, who is also from Coronis. Sailor Lead Crow steals Phobos and Deimos ' Star Seeds, killing them. The two of them have Star Seeds on a level near or equal to a Sailor Crystal.
In the live - action series, Rei 's crows appear only in the third episode. In the Another Story video game, they go with her on the search for Jadeite 's stone. A fake Deimos and Phobos appear in crow form in the musical Sailor Moon S - Usagi - Ai no Senshi e no Michi; like Luna and Artemis before them, they were portrayed by adult male actors in animal costumes.
Helios (祭司 エリオス, Saiji Eriosu) is the priest and guardian of Elysion, which is the sacred land that protects the planet Earth from within and the place where the Golden Kingdom used to be in the time of the Silver Millennium. Helios and Endymion never met, though they were aware of each other and the fact that they shared the same wish of protecting Earth. When Elysion is invaded by the Dead Moon Circus, Helios is sealed in the body of an alicorn, Pegasus (一角 天馬 (ペガサス), Pegasasu), and placed inside a cage. Remembering a woman he had seen in a vision, he sends out his spirit in the form of Pegasus to seek her help. He asks Sailor Moon and Sailor Chibi Moon for aid, giving them information and new weapons. He and Chibiusa become close, and he eventually discovers that her adult version is the woman he had seen. In the end, when the enemy is defeated and he has departed on the back of the "real '' Pegasus, Chibiusa thinks to herself that when she has grown up, he will become her "prince ''.
Helios is assisted by the Maenads (メナード, Menado), two priestesses who guard a shrine in Elysion. They escaped the Dead Moon Circus curse by falling asleep. The Maenads eventually awaken and guide Chibiusa to Helios, and later appear along with the main characters after Nehelenia 's defeat.
In the anime adaptation, Helios guards the Golden Crystal that protects the dreams of Earth 's people. He is attacked by the Dead Moon Circus for this reason, and leaves his own body to flee with the crystal. Taking the form of Pegasus, he places the crystal on his forehead as a horn and hides in Chibiusa 's dreams. There, he asks for her help and grants power to her and to her allies using several special items. Though he does not trust Chibiusa at first, they gradually develop a connection, and in the end he tells her his secrets.
In the anime series, he is voiced by Taiki Matsuno in Sailor Moon. In the Cloverway English adaptation, he is voiced by Rowan Tichenor and in the Viz Media English adaptation, he is voiced by Chris Niosi. In the musicals, Pegasus is voiced by Yuuta Enomoto.
Takeuchi stated that she was dissatisfied with Helios ' clothing design, having created his outfit in a hurry because it was easy to draw and she was pressed for time. She describes the result as "ugly '' and "a disaster '', commenting that the character inherited his "irresponsible ways '' from herself.
Princess Kakyuu (火球 皇女 (プリンセス), Kakyū Purinsesu) is the princess of Kinmoku, a fictional planet outside of the Solar System that is also the home of the Sailor Starlights, who are Kakyuu 's protectors and spend much of the story searching for her. Kinmoku is attacked and destroyed by Sailor Galaxia, while Kakyuu is injured during the battle. She can not reveal herself until her wounds are healed, and the Starlights lose contact with her. She travels to Earth because she senses the birth of the Silver Crystal, and hides in a censer that is guarded by Chibichibi. She has her own soldier form, Sailor Kakyuu, and later reveals to Sailor Moon that her own lover had died in the war against Galaxia. Kakyuu eventually reunites with the Starlights and accompanies Sailor Moon to Zero Star Sagittarius to confront Galaxia, but is mortally wounded by Sailor Chi. She dies in Sailor Moon 's arms, saying that she wants to be reborn, maybe in a world without war, but at the very least to be with everyone again. In the manga, Kakyuu had a lover who was killed by Galaxia.
In the anime adaptation, Kakyuu goes to Earth to locate the "Light of Hope '' and to hide from Galaxia. During her time under Chibichibi 's care, she is aware of the Starlights searching for her, but can not reveal herself too soon. She eventually saves Sailor Moon and the others from a black hole and resumes leadership of the Starlights. However, after Kakyuu is found, Galaxia steals her Star Seed, killing her. After Sailor Moon defeats Chaos, Kakyuu is revived. She and the Starlights return to Kinmoku to rebuild and start over. Her Sailor Soldier form is never shown in this adaptation.
In the original Japanese series, her voice actress is Sakiko Tamagawa. In the musical version, Princess Kakyuu is portrayed by Sakoto Yoshioka, Ai Toyama, and Asami Okamura.
Chibichibi (ちびちび) first appears in Act 44 of the manga and episode 182 of the anime. She appears to be a very young child and imitates the ends of others ' sentences, mostly saying "chibi ''. Her red - pink hair is always up in two heart shaped odango with little ringlets sticking out the sides, echoing Usagi 's hairstyle. Chibichibi 's name is a doubling of the Japanese term meaning "small person '' or "small child '' and is used both for that reason and because of Chibichibi 's similarity to Chibiusa. It is also a pun, as the word chibichibi means "making something last ''.
Chibichibi is first shown floating down to Earth with an umbrella in her hand and shows up at the Tsukino house. In the anime, she first meets Usagi in the park one afternoon and starts to follow her around, saying only "chibi chibi '' without having been prompted. Chibichibi attaches herself to Usagi 's family, whose memories are modified so that they believe her to be the youngest child of the family -- almost exactly what Chibiusa had done on her first appearance. Chibichibi is the caretaker of a small ornate censer in which Princess Kakyuu is resting, hidden from the evil Sailor Galaxia. Chibichibi eventually transforms, under her own power, into a Sailor Soldier called Sailor Chibichibi. In her Sailor Soldier form, she carries a heart scepter and uses it to defend herself and Sailor Moon, but is not shown using any attack of her own. Chibichibi 's childlike form is a disguise for Sailor Cosmos, a powerful Sailor Soldier who is the future version of Sailor Moon.
In the anime, Chibichibi is the Star Seed of Sailor Galaxia, who had once been a great force for good. When Galaxia fought Chaos, she could see no way to defeat it except to seal it away inside her own body. In order to protect her Star Seed from being corrupted, she sent it away to Earth, where it became Chibichibi. Chibichibi is referred to as the "light of hope '' (kibō no hikari) by the Starlights; their one chance for defeating Galaxia. In the end, Chibichibi transforms herself into the Sword of Sealing (fuuin no ken), the weapon Galaxia had used to seal away Chaos, and Chibichibi begs Sailor Moon to use it to defeat them. During the battle, Galaxia shatters the sword, killing Chibichibi. However, Chibichibi is revived along with all the other fallen Sailor Soldiers after Sailor Moon cleanses Galaxia of Chaos.
In the anime series, Chibichibi and Sailor Moon are voiced by Kotono Mitsuishi in a dual role. In the stage musicals, Chibichibi has been played by Mao Kawasaki, Mikiko Asuke, Yuka Gouchou, and Mina Horita. Takeuchi praised Kawasaki 's cuteness as Chibichibi. When she appears in the stage musicals, Chibichibi 's backstory always follows the anime version. She is given her own song, "Mou ii no '' (English: It 's All Right), which she sings to announce that she has come to rejoin Galaxia.
Sailor Cosmos (セーラー コスモス, Sērā Kosumosu) is the ultimate future form of Sailor Moon. She comes from a future which has been destroyed by the battle with Sailor Chaos; after ages of fighting, she despairs and flees to the past as the infant Chibichibi, to encourage Eternal Sailor Moon to defeat Chaos in the final battle of the series. At first, she wants Sailor Moon to destroy the Galaxy Cauldron altogether, ensuring Chaos ' destruction, but Sailor Moon protests, realizing that if the Cauldron is destroyed no more stars will be born, leaving the Galaxy without a future. She chooses to sacrifice herself to the Cauldron and seal Chaos away, which Cosmos finally realizes to have been the right decision. Reminded of the strength and courage she herself needs to have, she returns to the future with new hope. After the end of the anime adaptation, Takeuchi commented that she wished Sailor Cosmos had been used in Sailor Moon Sailor Stars. In the musicals, Sailor Cosmos is played by Satomi Okubo, who played Usagi Tsukino / Sailor Moon between 2013 and 2015.
Differences in character between the Sailor Soldiers mirror differences in their hairstyles, fashion, and magical items, which has translated well into doll lines. Sales of the Sailor Soldiers fashion dolls overtook those of Licca - chan in the 1990s. Mattel attributed this to the "fashion - action '' blend of the Sailor Moon storyline; doll accessories included both fashion items and the Soldier 's weapons. The first line of dolls included Queen Beryl, the first major antagonist of the series, a decision that was described as a "radical idea ''. The first dolls based on Chibiusa surprised Takeuchi because, at that time, the author had not even finalized the character 's hairstyle; she explained that viewing the doll 's head from various angles was wonderful. Bandai introduced a line of little dolls that included the Amazoness Quartet and, according to Takeuchi, these were her favorite because "with their costumes and faithfulness to the originals, the dolls really excelled. '' Bandai has released several S.H. Figuarts based on the characters ' appearances from both the first anime adaptation and Sailor Moon Crystal. Among those figures are the Sailor Soldiers, Tuxedo Mask, Black Lady, and Zoisite disguised as Sailor Moon. In early 2014, Megahouse released a set of trading figures consisting of twelve figurines, two for each Sailor Soldier and two for Tuxedo Mask.
Several characters, including Sailor Soldiers, villains, supporting characters, and monsters of the day are featured in a collectible card game which was released in 2000 by Dart Flipcards. A collaboration between Sailor Moon and Capcom took place in March 2018 as part of the 25th anniversary celebration of the Sailor Moon franchise. In this collaboration, the Felyne cat companion resembles Luna and wields Usagi 's Cutie Moon Rod weapon in the Monster Hunter XX expansion of Monster Hunter Generations.
Sailor Moon has been described largely in terms of its characters; a sustained 18 - volume narrative about a group of young heroines who are simultaneously heroic and introspective, active and emotional, dutiful and ambitious. The combination proved extremely successful, and Sailor Moon became internationally popular in both manga and anime formats.
The function of the Sailor Soldiers themselves has been analyzed by critics, often in terms of feminist theory. Susan J. Napier described the Sailor Soldiers as "powerful, yet childlike '', and suggested that this is because Sailor Moon is aimed towards an audience of young girls. She stated that the Sailor Soldiers readily accept their powers and destinies and do not agonize over them, which can be read as an expression of power and success. The Sailor Soldiers have been described as merging male and female traits, being both desirable and powerful. As sexualized teen heroines, they are significantly different from the sexless representation of 1980s teen heroines such as Nausicaä. Anne Allison noted that the use of the sailor fuku as a costume makes it easy for girls to identify with the Sailor Soldiers, but also for older males to see them as sex symbols. Unlike the female Power Rangers, who as the series go on become more unisex in both costume and poses, the Sailor Soldiers ' costumes become frillier and more "feminine ''.
Mary Grigsby considered that the Sailor Soldiers blend ancient characteristics and symbols of femininity with modern ideas, reminding the audience of a pre-modern time when females were equal to males, but other critics drew parallels with the modern character type of the aggressive cyborg woman, pointing out that the Sailor Soldiers are augmented by their magical equipment. Much of the Sailor Soldiers ' strength stems from their reliance and friendship with other girls rather than from men.
Kazuko Minomiya has described the daily lives of the girls within the series as risoukyou, or "utopic ''. They are shown as enjoying many leisure activities such as shopping, visiting amusement parks, and hanging out at the Crown Arcade. According to Allison, Minomiya points out that the depiction of life is harder and more serious for male superheroes. The characters "double '' as ordinary girls and as "celestially - empowered superheroes ''. The "highly stylized '' transformation that the Sailor Soldiers go through has been said to "symbolically separate '' the negative aspects of the characters (laziness, for example) and the positive aspects of the superheroine, and gives each girl her unique uniform and "a set of individual powers ''. Some commentators have read the transformation of the Sailor Soldiers as symbolic of puberty, as cosmetics appear on the Soldiers and their uniforms highlight cleavages, slim waists, and long legs, which "outright force the pun on heavenly bodies ''.
Jason Thompson found the Sailor Moon anime reinvigorated the magical girl genre by adding dynamic heroines and action - oriented plots. Following its success, similar series, such as Magic Knight Rayearth, Wedding Peach, Nurse Angel Ririka SOS, Revolutionary Girl Utena, Fushigi Yuugi and Pretty Cure, emerged.
|
who sings ain't that a kick in the head | Ain't That a Kick in the Head? - Wikipedia
"Ain't That a Kick in the Head? '' is a pop song written in 1960 with music by Jimmy Van Heusen and lyrics by Sammy Cahn. It was first recorded on May 10, 1960, by Dean Martin in a swinging big band jazz arrangement conducted by Nelson Riddle. Martin performed the song in the 1960 heist film Ocean 's 11 in an alternate arrangement featuring vibraphonist Red Norvo and his quartet.
The song has been recorded by many performers, including Wolfgang Parker, Robbie Williams, David Slater, Dean Martin, Ray Quinn, Hazell Dean and the Cherry Poppin ' Daddies. It also appeared in the soundtracks for films including Ocean 's Thirteen, Goodfellas, Mission: Impossible -- Ghost Protocol and A Bronx Tale, and the video games Fallout: New Vegas and Mafia II. It was later used as the soundtrack to a commercial for Budweiser, which was broadcast during Super Bowl XLI.
Irish boy band Westlife covered "Ain't That A Kick In The Head '' as the third and final single from their fifth studio album... Allow Us to Be Frank in 2004.
Martin 's daughter, Deana Martin, recorded "Ain't That A Kick in the Head '' in 2005. The song was released on the album Memories Are Made of This in 2006 by Big Fish Records.
The song was released as a duet with Kevin Spacey for the posthumous album Forever Cool (2007).
|
where's washington dc located on a map | Washington, D.C. - Wikipedia
Washington, D.C., formally the District of Columbia and commonly referred to as "Washington '', "the District '', or simply "D.C. '', is the capital of the United States.
The signing of the Residence Act on July 16, 1790, approved the creation of a capital district located along the Potomac River on the country 's East Coast. The U.S. Constitution provided for a federal district under the exclusive jurisdiction of the Congress and the District is therefore not a part of any state. The states of Maryland and Virginia each donated land to form the federal district, which included the pre-existing settlements of Georgetown and Alexandria. Named in honor of President George Washington, the City of Washington was founded in 1791 to serve as the new national capital. In 1846, Congress returned the land originally ceded by Virginia; in 1871, it created a single municipal government for the remaining portion of the District.
Washington had an estimated population of 681,170 as of July 2016. Commuters from the surrounding Maryland and Virginia suburbs raise the city 's population to more than one million during the workweek. The Washington metropolitan area, of which the District is the principal city, has a population of over 6 million, the sixth - largest metropolitan statistical area in the country.
The centers of all three branches of the federal government of the United States are in the District, including the Congress, President, and Supreme Court. Washington is home to many national monuments and museums, which are primarily situated on or around the National Mall. The city hosts 176 foreign embassies as well as the headquarters of many international organizations, trade unions, non-profit organizations, lobbying groups, and professional associations.
A locally elected mayor and a 13 - member council have governed the District since 1973. However, the Congress maintains supreme authority over the city and may overturn local laws. D.C. residents elect a non-voting, at - large congressional delegate to the House of Representatives, but the District has no representation in the Senate. The District receives three electoral votes in presidential elections as permitted by the Twenty - third Amendment to the United States Constitution, ratified in 1961.
Various tribes of the Algonquian - speaking Piscataway people (also known as the Conoy) inhabited the lands around the Potomac River when Europeans first visited the area in the early 17th century. One group known as the Nacotchtank (also called the Nacostines by Catholic missionaries) maintained settlements around the Anacostia River within the present - day District of Columbia. Conflicts with European colonists and neighboring tribes forced the relocation of the Piscataway people, some of whom established a new settlement in 1699 near Point of Rocks, Maryland.
In his Federalist No. 43, published January 23, 1788, James Madison argued that the new federal government would need authority over a national capital to provide for its own maintenance and safety. Five years earlier, a band of unpaid soldiers besieged Congress while its members were meeting in Philadelphia. Known as the Pennsylvania Mutiny of 1783, the event emphasized the need for the national government not to rely on any state for its own security.
Article One, Section Eight, of the Constitution permits the establishment of a "District (not exceeding ten miles square) as may, by cession of particular states, and the acceptance of Congress, become the seat of the government of the United States ''. However, the Constitution does not specify a location for the capital. In what is now known as the Compromise of 1790, Madison, Alexander Hamilton, and Thomas Jefferson came to an agreement that the federal government would pay each state 's remaining Revolutionary War debts in exchange for establishing the new national capital in the Southern United States.
On July 9, 1790, Congress passed the Residence Act, which approved the creation of a national capital on the Potomac River. The exact location was to be selected by President George Washington, who signed the bill into law on July 16. Formed from land donated by the states of Maryland and Virginia, the initial shape of the federal district was a square measuring 10 miles (16 km) on each side, totaling 100 square miles (259 km²).
Two pre-existing settlements were included in the territory: the port of Georgetown, Maryland, founded in 1751, and the city of Alexandria, Virginia, founded in 1749. During 1791 -- 92, Andrew Ellicott and several assistants, including a free African American astronomer named Benjamin Banneker, surveyed the borders of the federal district and placed boundary stones at every mile point. Many of the stones are still standing.
A new federal city was then constructed on the north bank of the Potomac, to the east of Georgetown. On September 9, 1791, the three commissioners overseeing the capital 's construction named the city in honor of President Washington. The federal district was named Columbia, which was a poetic name for the United States commonly in use at that time. Congress held its first session in Washington on November 17, 1800.
Congress passed the Organic Act of 1801, which officially organized the District and placed the entire territory under the exclusive control of the federal government. Further, the unincorporated area within the District was organized into two counties: the County of Washington to the east of the Potomac and the County of Alexandria to the west. After the passage of this Act, citizens living in the District were no longer considered residents of Maryland or Virginia, which therefore ended their representation in Congress.
On August 24 -- 25, 1814, in a raid known as the Burning of Washington, British forces invaded the capital during the War of 1812. The Capitol, Treasury, and White House were burned and gutted during the attack. Most government buildings were repaired quickly; however, the Capitol was largely under construction at the time and was not completed in its current form until 1868.
In the 1830s, the District 's southern territory of Alexandria went into economic decline partly due to neglect by Congress. The city of Alexandria was a major market in the American slave trade, and pro-slavery residents feared that abolitionists in Congress would end slavery in the District, further depressing the economy. Alexandria 's citizens petitioned Virginia to take back the land it had donated to form the District, through a process known as retrocession.
The Virginia General Assembly voted in February 1846 to accept the return of Alexandria and on July 9, 1846, Congress agreed to return all the territory that had been ceded by Virginia. Therefore, the District 's current area consists only of the portion originally donated by Maryland. Confirming the fears of pro-slavery Alexandrians, the Compromise of 1850 outlawed the slave trade in the District, although not slavery itself.
The outbreak of the American Civil War in 1861 led to expansion of the federal government and notable growth in the District 's population, including a large influx of freed slaves. President Abraham Lincoln signed the Compensated Emancipation Act in 1862, which ended slavery in the District of Columbia and freed about 3,100 enslaved persons, nine months prior to the Emancipation Proclamation. In 1868, Congress granted the District 's African American male residents the right to vote in municipal elections.
By 1870, the District 's population had grown 75 % from the previous census to nearly 132,000 residents. Despite the city 's growth, Washington still had dirt roads and lacked basic sanitation. Some members of Congress suggested moving the capital further west, but President Ulysses S. Grant refused to consider such a proposal.
Congress passed the Organic Act of 1871, which repealed the individual charters of the cities of Washington and Georgetown, and created a new territorial government for the whole District of Columbia. President Grant appointed Alexander Robey Shepherd to the position of governor in 1873. Shepherd authorized large - scale projects that greatly modernized Washington, but ultimately bankrupted the District government. In 1874, Congress replaced the territorial government with an appointed three - member Board of Commissioners.
The city 's first motorized streetcars began service in 1888 and generated growth in areas of the District beyond the City of Washington 's original boundaries. Washington 's urban plan was expanded throughout the District in the following decades. Georgetown was formally annexed by the City of Washington in 1895. However, the city had poor housing conditions and strained public works. Washington was the first city in the nation to undergo urban renewal projects as part of the "City Beautiful movement '' in the early 1900s.
Increased federal spending as a result of the New Deal in the 1930s led to the construction of new government buildings, memorials, and museums in Washington. World War II further increased government activity, adding to the number of federal employees in the capital; by 1950, the District 's population reached its peak of 802,178 residents.
The Twenty - third Amendment to the United States Constitution was ratified in 1961, granting the District three votes in the Electoral College for the election of president and vice president, but still no voting representation in Congress.
After the assassination of civil rights leader Dr. Martin Luther King, Jr., on April 4, 1968, riots broke out in the District, primarily in the U Street, 14th Street, 7th Street, and H Street corridors, centers of black residential and commercial areas. The riots raged for three days until more than 13,600 federal troops stopped the violence. Many stores and other buildings were burned; rebuilding was not completed until the late 1990s.
In 1973, Congress enacted the District of Columbia Home Rule Act, providing for an elected mayor and 13 - member council for the District. In 1975, Walter Washington became the first elected and first black mayor of the District.
On September 11, 2001, terrorists hijacked American Airlines Flight 77 and deliberately crashed the plane into the Pentagon in nearby Arlington, Virginia. United Airlines Flight 93, believed to be destined for Washington, D.C., crashed in Pennsylvania when passengers tried to recover control of the plane from hijackers.
Washington, D.C., is located in the mid-Atlantic region of the U.S. East Coast. Due to the District of Columbia retrocession, the city has a total area of 68.34 square miles (177.0 km²), of which 61.05 square miles (158.1 km²) is land and 7.29 square miles (18.9 km²) (10.67 %) is water. The District is bordered by Montgomery County, Maryland, to the northwest; Prince George 's County, Maryland, to the east; and Arlington and Alexandria, Virginia, to the south and west.
The south bank of the Potomac River forms the District 's border with Virginia and has two major tributaries: the Anacostia River and Rock Creek. Tiber Creek, a natural watercourse that once passed through the National Mall, was fully enclosed underground during the 1870s. The creek also formed a portion of the now - filled Washington City Canal, which allowed passage through the city to the Anacostia River from 1815 until the 1850s. The Chesapeake and Ohio Canal starts in Georgetown and was used during the 19th century to bypass the Little Falls of the Potomac River, located at the northwest edge of Washington at the Atlantic Seaboard fall line.
The highest natural elevation in the District is 409 feet (125 m) above sea level at Fort Reno Park in upper northwest Washington. The lowest point is sea level at the Potomac River. The geographic center of Washington is near the intersection of 4th and L Streets NW. Contrary to the urban legend, Washington was not built on a reclaimed swamp, but wetlands did cover areas along the water.
The District has 7,464 acres (30.21 km²) of parkland, about 19 % of the city 's total area and the second - highest percentage among high - density U.S. cities. The National Park Service manages most of the 9,122 acres (36.92 km²) of city land owned by the U.S. government. Rock Creek Park is a 1,754 - acre (7.10 km²) urban forest in Northwest Washington, which extends 9.3 miles (15.0 km) through a stream valley that bisects the city. Established in 1890, it is the country 's fourth - oldest national park and is home to a variety of plant and animal species including raccoon, deer, owls, and coyotes. Other National Park Service properties include the C&O Canal National Historical Park, the National Mall and Memorial Parks, Theodore Roosevelt Island, Columbia Island, Fort Dupont Park, Meridian Hill Park, Kenilworth Park and Aquatic Gardens, and Anacostia Park. The D.C. Department of Parks and Recreation maintains the city 's 900 acres (3.6 km²) of athletic fields and playgrounds, 40 swimming pools, and 68 recreation centers. The U.S. Department of Agriculture operates the 446 - acre (1.80 km²) U.S. National Arboretum in Northeast Washington.
Washington is in the northern part of the humid subtropical climate zone (Köppen: Cfa). However, under the Trewartha climate classification, the city has a temperate maritime climate (Do). Winters are usually chilly with light snow, and summers are hot and humid. The District is in plant hardiness zone 8a near downtown, and zone 7b elsewhere in the city, indicating a humid subtropical climate.
Spring and fall are mild to warm, while winter is chilly with annual snowfall averaging 15.5 inches (39 cm). Winter temperatures average around 38 ° F (3.3 ° C) from mid-December to mid-February. Summers are hot and humid with a July daily average of 79.8 ° F (26.6 ° C) and average daily relative humidity around 66 %, which can cause moderate personal discomfort. The combination of heat and humidity in the summer brings very frequent thunderstorms, some of which occasionally produce tornadoes in the area.
Blizzards affect Washington on average once every four to six years. The most violent storms are called "nor'easters '', which often affect large sections of the East Coast. From January 27 to 28, 1922, the city officially received 28 inches (71 cm) of snowfall, the largest snowstorm since official measurements began in 1885. According to notes kept at the time, the city received between 30 and 36 inches (76 and 91 cm) from a snowstorm in January 1772.
Hurricanes (or their remnants) occasionally track through the area in late summer and early fall, but are often weak by the time they reach Washington, partly due to the city 's inland location. Flooding of the Potomac River, however, caused by a combination of high tide, storm surge, and runoff, has been known to cause extensive property damage in the neighborhood of Georgetown.
Precipitation occurs throughout the year.
The highest recorded temperature was 106 ° F (41 ° C) on August 6, 1918, and on July 20, 1930, while the lowest recorded temperature was − 15 ° F (− 26 ° C) on February 11, 1899, during the Great Blizzard of 1899. During a typical year, the city averages about 37 days at or above 90 ° F (32.2 ° C) and 64 nights at or below freezing.
Washington, D.C., is a planned city. In 1791, President Washington commissioned Pierre (Peter) Charles L'Enfant, a French - born architect and city planner, to design the new capital. He enlisted Scottish surveyor Alexander Ralston to help lay out the city plan. The L'Enfant Plan featured broad streets and avenues radiating out from rectangles, providing room for open space and landscaping. He based his design on plans of cities such as Paris, Amsterdam, Karlsruhe, and Milan that Thomas Jefferson had sent to him. L'Enfant's design also envisioned a garden - lined "grand avenue '' approximately 1 mile (1.6 km) in length and 400 feet (120 m) wide in the area that is now the National Mall.
President Washington dismissed L'Enfant in March 1792 due to conflicts with the three commissioners appointed to supervise the capital 's construction. Andrew Ellicott, who had worked with L'Enfant surveying the city, was then tasked with completing the design. Though Ellicott made revisions to the original plans, including changes to some street patterns, L'Enfant is still credited with the overall design of the city.
By the early 1900s, L'Enfant's vision of a grand national capital had become marred by slums and randomly placed buildings, including a railroad station on the National Mall. Congress formed a special committee charged with beautifying Washington 's ceremonial core. What became known as the McMillan Plan was finalized in 1901 and included re-landscaping the Capitol grounds and the National Mall, clearing slums, and establishing a new citywide park system. The plan is thought to have largely preserved L'Enfant's intended design.
By law, Washington 's skyline is low and sprawling. The federal Heights of Buildings Act of 1910 allows buildings that are no taller than the width of the adjacent street, plus 20 feet (6.1 m). Despite popular belief, no law has ever limited buildings to the height of the United States Capitol or the 555 - foot (169 m) Washington Monument, which remains the District 's tallest structure. City leaders have criticized the height restriction as a primary reason why the District has limited affordable housing and traffic problems caused by urban sprawl.
The District is divided into four quadrants of unequal area: Northwest (NW), Northeast (NE), Southeast (SE), and Southwest (SW). The axes bounding the quadrants radiate from the U.S. Capitol building. All road names include the quadrant abbreviation to indicate their location and house numbers generally correspond with the number of blocks away from the Capitol. Most streets are set out in a grid pattern with east -- west streets named with letters (e.g., C Street SW), north -- south streets with numbers (e.g., 4th Street NW), and diagonal avenues, many of which are named after states.
The City of Washington was bordered by Boundary Street to the north (renamed Florida Avenue in 1890), Rock Creek to the west, and the Anacostia River to the east. Washington 's street grid was extended, where possible, throughout the District starting in 1888. Georgetown 's streets were renamed in 1895. Some streets are particularly noteworthy, such as Pennsylvania Avenue, which connects the White House to the Capitol and K Street, which houses the offices of many lobbying groups. Washington hosts 177 foreign embassies, constituting approximately 297 buildings beyond the more than 1,600 residential properties owned by foreign countries, many of which are on a section of Massachusetts Avenue informally known as Embassy Row.
The architecture of Washington varies greatly. Six of the top 10 buildings in the American Institute of Architects ' 2007 ranking of "America 's Favorite Architecture '' are in the District of Columbia: the White House; the Washington National Cathedral; the Thomas Jefferson Memorial; the United States Capitol; the Lincoln Memorial; and the Vietnam Veterans Memorial. The neoclassical, Georgian, gothic, and modern architectural styles are all reflected among those six structures and many other prominent edifices in Washington. Notable exceptions include buildings constructed in the French Second Empire style such as the Eisenhower Executive Office Building.
Outside downtown Washington, architectural styles are even more varied. Historic buildings are designed primarily in the Queen Anne, Châteauesque, Richardsonian Romanesque, Georgian revival, Beaux - Arts, and a variety of Victorian styles. Rowhouses are especially prominent in areas developed after the Civil War and typically follow Federalist and late Victorian designs. Georgetown 's Old Stone House was built in 1765, making it the oldest - standing original building in the city. Founded in 1789, Georgetown University features a mix of Romanesque and Gothic Revival architecture. The Ronald Reagan Building is the largest building in the District with a total area of approximately 3.1 million square feet (288,000 m).
The U.S. Census Bureau estimates that the District 's population was 681,170 on July 1, 2016, a 13.2 % increase since the 2010 United States Census. The increase continues a growth trend since 2000, following a half - century of population decline. The city was the 24th most populous place in the United States as of 2010. According to data from 2010, commuters from the suburbs increase the District 's daytime population to over one million people. If the District were a state it would rank 49th in population, ahead of Vermont and Wyoming.
The Washington Metropolitan Area, which includes the District and surrounding suburbs, is the sixth - largest metropolitan area in the United States with an estimated 6 million residents in 2014. When the Washington area is included with Baltimore and its suburbs, the Baltimore -- Washington Metropolitan Area had a population exceeding 9.5 million residents in 2014, the fourth - largest combined statistical area in the country.
According to 2016 Census Bureau data, the population of Washington, D.C., was 47.7 % Black or African American, 44.6 % White (36.4 % non-Hispanic White), 4.1 % Asian, 0.6 % American Indian or Alaska Native, and 0.2 % Native Hawaiian or Other Pacific Islander. Individuals from two or more races made up 2.7 % of the population. Hispanics of any race made up 10.9 % of the District 's population.
Washington has had a significant African American population since the city 's foundation. African American residents composed about 30 % of the District 's total population between 1800 and 1940. The black population reached a peak of 70 % by 1970, but has since steadily declined due to many African Americans moving to the surrounding suburbs. Partly as a result of gentrification, there was a 31.4 % increase in the non-Hispanic white population and an 11.5 % decrease in the black population between 2000 and 2010.
About 17 % of D.C. residents were age 18 or younger in 2010; lower than the U.S. average of 24 %. However, at 34 years old, the District had the lowest median age compared to the 50 states. As of 2010, there were an estimated 81,734 immigrants living in Washington, D.C. Major sources of immigration include El Salvador, Vietnam, and Ethiopia, with a concentration of Salvadorans in the Mount Pleasant neighborhood.
Researchers found that there were 4,822 same - sex couples in the District of Columbia in 2010; about 2 % of total households. Legislation authorizing same - sex marriage passed in 2009 and the District began issuing marriage licenses to same - sex couples in March 2010.
A 2007 report found that about one - third of District residents were functionally illiterate, compared to a national rate of about one in five. This is attributed in part to immigrants who are not proficient in English. As of 2011, 85 % of D.C. residents age 5 and older spoke English at home as a primary language. Half of residents had at least a four - year college degree in 2006. D.C. residents had a personal income per capita of $55,755; higher than any of the 50 states. However, 19 % of residents were below the poverty level in 2005, higher than any state except Mississippi.
Of the District 's population, 17 % is Baptist, 13 % is Catholic, 6 % is Evangelical Protestant, 4 % is Methodist, 3 % is Episcopalian / Anglican, 3 % is Jewish, 2 % is Eastern Orthodox, 1 % is Pentecostal, 1 % is Buddhist, 1 % is Adventist, 1 % is Lutheran, 1 % is Muslim, 1 % is Presbyterian, 1 % is Mormon, and 1 % is Hindu.
Over 90 % of D.C. residents have health insurance coverage, the second - highest rate in the nation. This is due in part to city programs that help provide insurance to low - income individuals who do not qualify for other types of coverage. A 2009 report found that at least 3 % of District residents have HIV or AIDS, which the Centers for Disease Control and Prevention (CDC) characterizes as a "generalized and severe '' epidemic.
Crime in Washington, D.C., is concentrated in areas associated with poverty, drug abuse, and gangs. A 2010 study found that 5 % of city blocks accounted for over one - quarter of the District 's total crime. The more affluent neighborhoods of Northwest Washington are typically safe, but reports of violent crime increase in poorer neighborhoods generally concentrated in the eastern portion of the city. Approximately 60,000 residents are ex-convicts.
Washington was often described as the "murder capital '' of the United States during the early 1990s. The number of murders peaked in 1991 at 479, but the level of violence then began to decline significantly. By 2012, Washington 's annual murder count had dropped to 88, the lowest total since 1961. The murder rate has since risen from that historic low, though it remains close to half the rate of the early 2000s. In 2016, the District 's Metropolitan Police Department tallied 135 homicides, a 53 % increase from 2012 but a 17 % decrease from 2015. Many neighborhoods such as Columbia Heights and Logan Circle are becoming safer and more vibrant. However, incidents of robberies and thefts have remained higher in these areas because of increased nightlife activity and greater numbers of affluent residents. Still, citywide reports of both property and violent crimes have declined by nearly half since their most recent highs in the mid-1990s.
On June 26, 2008, the Supreme Court of the United States held in District of Columbia v. Heller that the city 's 1976 handgun ban violated the right to keep and bear arms as protected under the Second Amendment. However, the ruling does not prohibit all forms of gun control; laws requiring firearm registration remain in place, as does the city 's assault weapon ban. In addition to the District 's own Metropolitan Police Department, many federal law enforcement agencies have jurisdiction in the city as well; most visibly the U.S. Park Police, founded in 1791.
Washington has a growing, diversified economy with an increasing percentage of professional and business service jobs. The gross state product of the District in 2010 was $103.3 billion, which would rank it No. 34 compared to the 50 states. The gross product of the Washington Metropolitan Area was $435 billion in 2014, making it the sixth - largest metropolitan economy in the United States. Between 2009 and 2016, GDP per capita in Washington, D.C., consistently ranked highest among U.S. states. In 2016, at $160,472, its GDP per capita was almost three times that of Massachusetts, which ranked second in the country. As of June 2011, the Washington Metropolitan Area had an unemployment rate of 6.2 %; the second - lowest rate among the 49 largest metro areas in the nation. The District of Columbia itself had an unemployment rate of 9.8 % during the same time period.
In 2012, the federal government accounted for about 29 % of the jobs in Washington, D.C. This is thought to insulate Washington from national economic downturns because the federal government continues operations even during recessions. Many organizations such as law firms, independent contractors (both defense and civilian), non-profit organizations, lobbying firms, trade unions, industry trade groups, and professional associations have their headquarters in or near D.C. to be close to the federal government.
Tourism is Washington 's second largest industry. Approximately 18.9 million visitors contributed an estimated $4.8 billion to the local economy in 2012. The District also hosts nearly 200 foreign embassies and international organizations such as the World Bank, the International Monetary Fund (IMF), the Organization of American States, the Inter-American Development Bank, and the Pan American Health Organization. In 2008, the foreign diplomatic corps in Washington employed about 10,000 people and contributed an estimated $400 million annually to the local economy.
The District has growing industries not directly related to government, especially in the areas of education, finance, public policy, and scientific research. Georgetown University, George Washington University, Washington Hospital Center, Children 's National Medical Center and Howard University are the top five non-government - related employers in the city as of 2009. According to statistics compiled in 2011, four of the largest 500 companies in the country were headquartered in the District. In the 2017 Global Financial Centres Index, Washington was ranked as having the 12th most competitive financial center in the world, and fifth most competitive in the United States (after New York City, San Francisco, Chicago, and Boston).
The National Mall is a large, open park in downtown Washington between the Lincoln Memorial and the United States Capitol. Given its prominence, the mall is often the location of political protests, concerts, festivals, and presidential inaugurations. The Washington Monument and the Jefferson Pier are near the center of the mall, south of the White House. Also on the mall are the National World War II Memorial at the east end of the Lincoln Memorial Reflecting Pool, the Korean War Veterans Memorial, and the Vietnam Veterans Memorial.
Directly south of the mall, the Tidal Basin features rows of Japanese cherry blossom trees that originated as gifts from the nation of Japan. The Franklin Delano Roosevelt Memorial, George Mason Memorial, Jefferson Memorial, Martin Luther King Jr. Memorial, and the District of Columbia War Memorial are around the Tidal Basin.
The National Archives houses thousands of documents important to American history including the Declaration of Independence, the United States Constitution, and the Bill of Rights. Located in three buildings on Capitol Hill, the Library of Congress is the largest library complex in the world with a collection of over 147 million books, manuscripts, and other materials. The United States Supreme Court Building was completed in 1935; before then, the court held sessions in the Old Senate Chamber of the Capitol.
The Smithsonian Institution is an educational foundation chartered by Congress in 1846 that maintains most of the nation 's official museums and galleries in Washington, D.C. The U.S. government partially funds the Smithsonian, and its collections are open to the public free of charge. The Smithsonian 's locations had a combined total of 30 million visits in 2013. The most visited museum is the National Museum of Natural History on the National Mall. Other Smithsonian Institution museums and galleries on the mall are: the National Air and Space Museum; the National Museum of African Art; the National Museum of American History; the National Museum of the American Indian; the Sackler and Freer galleries, which both focus on Asian art and culture; the Hirshhorn Museum and Sculpture Garden; the Arts and Industries Building; the S. Dillon Ripley Center; and the Smithsonian Institution Building (also known as "The Castle ''), which serves as the institution 's headquarters.
The Smithsonian American Art Museum and the National Portrait Gallery are housed in the Old Patent Office Building, near Washington 's Chinatown. The Renwick Gallery is officially part of the Smithsonian American Art Museum but is in a separate building near the White House. Other Smithsonian museums and galleries include: the Anacostia Community Museum in Southeast Washington; the National Postal Museum near Union Station; and the National Zoo in Woodley Park.
The National Gallery of Art is on the National Mall near the Capitol and features works of American and European art. The gallery and its collections are owned by the U.S. government but are not a part of the Smithsonian Institution. The National Building Museum, which occupies the former Pension Building near Judiciary Square, was chartered by Congress and hosts exhibits on architecture, urban planning, and design.
There are many private art museums in the District of Columbia, which house major collections and exhibits open to the public such as the National Museum of Women in the Arts; the Corcoran Gallery of Art, the largest private museum in Washington; and The Phillips Collection in Dupont Circle, the first museum of modern art in the United States. Other private museums in Washington include the Newseum, the O Street Museum Foundation, the International Spy Museum, the National Geographic Society Museum, and the Marian Koshland Science Museum. The United States Holocaust Memorial Museum near the National Mall maintains exhibits, documentation, and artifacts related to the Holocaust.
Washington, D.C., is a national center for the arts. The John F. Kennedy Center for the Performing Arts is home to the National Symphony Orchestra, the Washington National Opera, and the Washington Ballet. The Kennedy Center Honors are awarded each year to those in the performing arts who have contributed greatly to the cultural life of the United States. The historic Ford 's Theatre, site of the assassination of President Abraham Lincoln, continues to operate as a functioning performance space as well as museum.
The Marine Barracks near Capitol Hill houses the United States Marine Band; founded in 1798, it is the country 's oldest professional musical organization. American march composer and Washington - native John Philip Sousa led the Marine Band from 1880 until 1892. Founded in 1925, the United States Navy Band has its headquarters at the Washington Navy Yard and performs at official events and public concerts around the city.
Washington has a strong local theater tradition. Founded in 1950, Arena Stage achieved national attention and spurred growth in the city 's independent theater movement that now includes organizations such as the Shakespeare Theatre Company, Woolly Mammoth Theatre Company, and the Studio Theatre. Arena Stage opened its newly renovated home in the city 's emerging Southwest waterfront area in 2010. The GALA Hispanic Theatre, now housed in the historic Tivoli Theatre in Columbia Heights, was founded in 1976 and is a National Center for the Latino Performing Arts.
The U Street Corridor in Northwest D.C., known as "Washington 's Black Broadway '', is home to institutions like the Howard Theatre, Bohemian Caverns, and the Lincoln Theatre, which hosted music legends such as Washington - native Duke Ellington, John Coltrane, and Miles Davis. Washington has its own native music genre called go - go; a post-funk, percussion - driven flavor of rhythm and blues that was popularized in the late 1970s by D.C. band leader Chuck Brown.
The District is an important center for indie culture and music in the United States. The label Dischord Records, formed by Ian MacKaye, was one of the most crucial independent labels in the genesis of 1980s punk and eventually indie rock in the 1990s. Modern alternative and indie music venues like The Black Cat and the 9: 30 Club bring popular acts to the U Street area.
Washington is one of 13 cities in the United States with teams from all four major professional men 's sports and is home to one major professional women 's team. The Washington Wizards (National Basketball Association), the Washington Capitals (National Hockey League), and the Washington Mystics (Women 's National Basketball Association), play at the Capital One Arena in Chinatown. Nationals Park, which opened in Southeast D.C. in 2008, is home to the Washington Nationals (Major League Baseball). D.C. United (Major League Soccer) plays at RFK Stadium. The Washington Redskins (National Football League) play at FedExField in nearby Landover, Maryland.
Current D.C. teams have won a combined ten professional league championships: the Washington Redskins have won five; D.C. United has won four; and the Washington Wizards (then the Washington Bullets) have won a single championship.
Other professional and semi-professional teams in Washington include: the Washington Kastles (World TeamTennis); the Washington D.C. Slayers (USA Rugby League); the Baltimore Washington Eagles (U.S. Australian Football League); the D.C. Divas (Independent Women 's Football League); and the Potomac Athletic Club RFC (Rugby Super League). The William H.G. FitzGerald Tennis Center in Rock Creek Park hosts the Citi Open. Washington is also home to two major annual marathon races: the Marine Corps Marathon, which is held every autumn, and the Rock ' n ' Roll USA Marathon held in the spring. The Marine Corps Marathon began in 1976 and is sometimes called "The People 's Marathon '' because it is the largest marathon that does not offer prize money to participants.
The District 's four NCAA Division I teams, American Eagles, George Washington Colonials, Georgetown Hoyas and Howard Bison and Lady Bison, have a broad following. The Georgetown Hoyas men 's basketball team is the most notable and also plays at the Capital One Arena. From 2008 to 2012, the District hosted an annual college football bowl game at RFK Stadium, called the Military Bowl. The D.C. area is home to one regional sports television network, Comcast SportsNet (CSN), based in Bethesda, Maryland.
Washington, D.C., is a prominent center for national and international media. The Washington Post, founded in 1877, is the oldest and most - read local daily newspaper in Washington. It is probably most notable for its coverage of national and international politics and for exposing the Watergate scandal. "The Post '', as it is popularly called, had the sixth - highest readership of all news dailies in the country in 2011. The Washington Post Company also publishes a daily free commuter newspaper called the Express, which summarizes events, sports and entertainment, as well as the Spanish - language paper El Tiempo Latino.
Another popular local daily is The Washington Times, the city 's second general interest broadsheet and also an influential paper in political circles. The alternative weekly Washington City Paper also has substantial readership in the Washington area.
Some community and specialty papers focus on neighborhood and cultural issues, including the weekly Washington Blade and Metro Weekly, which focus on LGBT issues; the Washington Informer and The Washington Afro American, which highlight topics of interest to the black community; and neighborhood newspapers published by The Current Newspapers. Congressional Quarterly, The Hill, Politico and Roll Call newspapers focus exclusively on issues related to Congress and the federal government. Other publications based in Washington include the National Geographic magazine and political publications such as The Washington Examiner, The New Republic and Washington Monthly.
The Washington Metropolitan Area is the ninth - largest television media market in the nation, with two million homes, approximately 2 % of the country 's population. Several media companies and cable television channels have their headquarters in the area, including C - SPAN; Black Entertainment Television (BET); Radio One; the National Geographic Channel; Smithsonian Networks; National Public Radio (NPR); Travel Channel (in Chevy Chase, Maryland); Discovery Communications (in Silver Spring, Maryland); and the Public Broadcasting Service (PBS) (in Arlington, Virginia). The headquarters of Voice of America, the U.S. government 's international news service, is near the Capitol in Southwest Washington.
Article One, Section Eight of the United States Constitution grants the United States Congress "exclusive jurisdiction '' over the city. The District did not have an elected local government until the passage of the 1973 Home Rule Act. The Act devolved certain Congressional powers to an elected mayor, currently Muriel Bowser, and the thirteen - member Council of the District of Columbia. However, Congress retains the right to review and overturn laws created by the council and intervene in local affairs.
Each of the city 's eight wards elects a single member of the council and residents elect four at - large members to represent the District as a whole. The council chair is also elected at - large. There are 37 Advisory Neighborhood Commissions (ANCs) elected by small neighborhood districts. ANCs can issue recommendations on all issues that affect residents; government agencies give their advice careful consideration. The Attorney General of the District of Columbia, currently Karl Racine, is elected to a four - year term.
Washington, D.C., observes all federal holidays and also celebrates Emancipation Day on April 16, which commemorates the end of slavery in the District. The flag of Washington, D.C., was adopted in 1938 and is a variation on George Washington 's family coat of arms.
The mayor and council set local taxes and a budget, which must be approved by Congress. The Government Accountability Office and other analysts have estimated that the city 's high percentage of tax - exempt property and the Congressional prohibition of commuter taxes create a structural deficit in the District 's local budget of anywhere between $470 million and over $1 billion per year. Congress typically provides additional grants for federal programs such as Medicaid and the operation of the local justice system; however, analysts claim that the payments do not fully resolve the imbalance.
The city 's local government, particularly during the mayoralty of Marion Barry, was criticized for mismanagement and waste. During his administration in 1989, The Washington Monthly magazine claimed that the District had "the worst city government in America. '' In 1995, at the start of Barry 's fourth term, Congress created the District of Columbia Financial Control Board to oversee all municipal spending. Mayor Anthony Williams won election in 1998 and oversaw a period of urban renewal and budget surpluses. The District regained control over its finances in 2001 and the oversight board 's operations were suspended.
The District is not a state and therefore has no voting representation in the Congress. D.C. residents elect a non-voting delegate to the House of Representatives, currently Eleanor Holmes Norton (D - D.C. At - Large), who may sit on committees, participate in debate, and introduce legislation, but can not vote on the House floor. The District has no official representation in the United States Senate. Neither chamber seats the District 's elected "shadow '' representative or senators. Unlike residents of U.S. territories such as Puerto Rico or Guam, which also have non-voting delegates, D.C. residents are subject to all federal taxes. In the financial year 2012, D.C. residents and businesses paid $20.7 billion in federal taxes; more than the taxes collected from 19 states and the highest federal taxes per capita.
A 2005 poll found that 78 % of Americans did not know that residents of the District of Columbia have less representation in Congress than residents of the 50 states. Efforts to raise awareness about the issue have included campaigns by grassroots organizations and featuring the city 's unofficial motto, "Taxation Without Representation '', on D.C. vehicle license plates. There is evidence of nationwide approval for D.C. voting rights; various polls indicate that 61 to 82 % of Americans believe that D.C. should have voting representation in Congress. Despite public support, attempts to grant the District voting representation, including the D.C. statehood movement and the proposed District of Columbia Voting Rights Amendment, have been unsuccessful.
Opponents of D.C. voting rights argue that the Founding Fathers never intended for District residents to have a vote in Congress since the Constitution makes clear that representation must come from the states. Those opposed to making D.C. a state claim that such a move would destroy the notion of a separate national capital and that statehood would unfairly grant Senate representation to a single city.
Washington, D.C., has fourteen official sister city agreements. Listed in the order each agreement was first established, they are: Bangkok, Thailand (1962, renewed 2002); Dakar, Senegal (1980, renewed 2006); Beijing, China (1984, renewed 2004); Brussels, Belgium (1985, renewed 2002); Athens, Greece (2000); Paris, France (2000 as a friendship and cooperation agreement, renewed 2005); Pretoria, South Africa (2002, renewed 2008); Seoul, South Korea (2006); Accra, Ghana (2006); Sunderland, United Kingdom (2006); Rome, Italy (2011); Ankara, Turkey (2011); Brasília, Brazil (2013); and Addis Ababa, Ethiopia (2013). Each of the listed cities is a national capital except for Sunderland, which includes the town of Washington, the ancestral home of George Washington 's family. Paris and Rome are each formally recognized as a "partner city '' because each maintains a policy of having only one sister city.
District of Columbia Public Schools (DCPS) operates the city 's 123 public schools. The number of students in DCPS steadily decreased for 39 years until 2009. In the 2010 -- 11 school year, 46,191 students were enrolled in the public school system. DCPS has one of the highest - cost yet lowest - performing school systems in the country, both in terms of infrastructure and student achievement. Mayor Adrian Fenty 's administration made sweeping changes to the system by closing schools, replacing teachers, firing principals, and using private education firms to aid curriculum development.
The District of Columbia Public Charter School Board monitors the 52 public charter schools in the city. Due to the perceived problems with the traditional public school system, enrollment in public charter schools has steadily increased. As of fall 2010, D.C. charter schools had a total enrollment of about 32,000, a 9 % increase from the prior year. The District is also home to 92 private schools, which enrolled approximately 18,000 students in 2008. The District of Columbia Public Library operates 25 neighborhood locations including the landmark Martin Luther King Jr. Memorial Library.
Private universities include American University (AU), the Catholic University of America (CUA), Gallaudet University, George Washington University (GW), Georgetown University (GU), Howard University, the Johns Hopkins University School of Advanced International Studies (SAIS), and Trinity Washington University. The Corcoran College of Art and Design provides specialized arts instruction and other higher - education institutions offer continuing, distance and adult education. The University of the District of Columbia (UDC) is a public university providing undergraduate and graduate education. D.C. residents may also be eligible for a grant of up to $10,000 per year to offset the cost of tuition at any public university in the country.
The District is known for its medical research institutions such as Washington Hospital Center and the Children 's National Medical Center, as well as the National Institutes of Health in Bethesda, Maryland. In addition, the city is home to three medical schools and associated teaching hospitals at George Washington, Georgetown, and Howard universities.
There are 1,500 miles (2,400 km) of streets, parkways, and avenues in the District. Due to the freeway revolts of the 1960s, much of the proposed interstate highway system through the middle of Washington was never built. Interstate 95 (I - 95), the nation 's major east coast highway, therefore bends around the District to form the eastern portion of the Capital Beltway. A portion of the proposed highway funding was directed to the region 's public transportation infrastructure instead. The interstate highways that continue into Washington, including I - 66 and I - 395, both terminate shortly after entering the city.
The Washington Metropolitan Area Transit Authority (WMATA) operates the Washington Metro, the city 's rapid transit system, as well as Metrobus. Both systems serve the District and its suburbs. Metro opened on March 27, 1976 and, as of July 2014, consists of 91 stations and 117 miles (188 km) of track. With an average of about one million trips each weekday, Metro is the second - busiest rapid transit system in the country. Metrobus serves over 400,000 riders each weekday and is the nation 's fifth - largest bus system. The city also operates its own DC Circulator bus system, which connects commercial areas within central Washington.
Union Station is the city 's main train station and services approximately 70,000 people each day. It is Amtrak 's second - busiest station with 4.6 million passengers annually and is the southern terminus for the Northeast Corridor and Acela Express routes. Maryland 's MARC and Virginia 's VRE commuter trains and the Metrorail Red Line also provide service into Union Station. Following renovations in 2011, Union Station became Washington 's primary intercity bus transit center.
Three major airports serve the District. Ronald Reagan Washington National Airport is across the Potomac River from downtown Washington in Arlington, Virginia and primarily handles domestic flights. Major international flights arrive and depart from Washington Dulles International Airport, 26.3 miles (42.3 km) west of the District in Fairfax and Loudoun counties in Virginia. Baltimore - Washington International Thurgood Marshall Airport is 31.7 miles (51.0 km) northeast of the District in Anne Arundel County, Maryland.
According to a 2010 study, Washington - area commuters spent 70 hours a year in traffic delays, which tied with Chicago for having the nation 's worst road congestion. However, 37 % of Washington - area commuters take public transportation to work, the second - highest rate in the country. An additional 12 % of D.C. commuters walked to work, 6 % carpooled, and 3 % traveled by bicycle in 2010. A 2011 study by Walk Score found that Washington was the seventh-most walkable city in the country with 80 % of residents living in neighborhoods that are not car dependent.
An expected 32 % increase in transit usage within the District by 2030 has spurred construction of a new DC Streetcar system to interconnect the city 's neighborhoods. Construction has also started on an additional Metro line that will connect Washington to Dulles airport. The District is part of the regional Capital Bikeshare program. Started in 2010, it is currently one of the largest bicycle sharing systems in the country with over 4,351 bicycles and more than 395 stations all provided by PBSC Urban Solutions. By 2012, the city 's network of marked bicycle lanes covered 56 miles (90 km) of streets.
The District of Columbia Water and Sewer Authority (i.e. WASA or D.C. Water) is an independent authority of the D.C. government that provides drinking water and wastewater collection in Washington. WASA purchases water from the historic Washington Aqueduct, which is operated by the Army Corps of Engineers. The water, sourced from the Potomac River, is treated and stored in the city 's Dalecarlia, Georgetown, and McMillan reservoirs. The aqueduct provides drinking water for a total of 1.1 million people in the District and Virginia, including Arlington, Falls Church, and a portion of Fairfax County. The authority also provides sewage treatment services for an additional 1.6 million people in four surrounding Maryland and Virginia counties.
Pepco is the city 's electric utility and services 793,000 customers in the District and suburban Maryland. An 1889 law prohibits overhead wires within much of the historic City of Washington. As a result, all power lines and telecommunication cables are located underground in downtown Washington, and traffic signals are placed at the edge of the street. A plan announced in 2013 would bury an additional 60 miles (97 km) of primary power lines throughout the District.
Washington Gas is the city 's natural gas utility and serves over one million customers in the District and its suburbs. Incorporated by Congress in 1848, the company installed the city 's first gas lights in the Capitol, the White House, and along Pennsylvania Avenue.
|
which of the following is an example of kinetic energy chemical energy | Chemical energy - wikipedia
In chemistry, chemical energy is the potential of a chemical substance to undergo a transformation through a chemical reaction and thereby form other chemical substances. Examples include batteries, food, and gasoline. Breaking or making chemical bonds involves energy, which may be either absorbed or released by a chemical system. A very common misconception is that energy is released when bonds are broken; in fact, energy is required to break bonds.
Energy that can be released (or absorbed) because of a reaction between a set of chemical substances is equal to the difference between the energy content of the products and the reactants, if the initial and final temperatures are the same. This change in energy can be estimated from the bond energies of the various chemical bonds in the reactants and products. It can also be calculated from $\Delta U_f^\circ(\mathrm{reactants})$, the internal energy of formation of the reactant molecules, and $\Delta U_f^\circ(\mathrm{products})$, the internal energy of formation of the product molecules. The internal energy change of a chemical process is equal to the heat exchanged if it is measured under conditions of constant volume and equal initial and final temperature, as in a closed container such as a bomb calorimeter. However, under conditions of constant pressure, as in reactions in vessels open to the atmosphere, the measured heat change is not always equal to the internal energy change, because pressure - volume work also releases or absorbs energy. (The heat change at constant pressure is called the enthalpy change; in this case, the enthalpy of reaction, if the initial and final temperatures are equal.)
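The bond - energy estimate described above can be sketched numerically. This is a minimal illustration, not taken from the article: the bond enthalpy values below are approximate textbook averages (an assumption), so the result is only an estimate of the true reaction enthalpy.

```python
# Estimate the enthalpy change of a reaction from average bond enthalpies:
#   dH ~= (energy absorbed breaking reactant bonds)
#       - (energy released forming product bonds)
# Bond enthalpies are approximate textbook averages in kJ/mol (assumption).
BOND_ENTHALPY = {"C-H": 413, "O=O": 498, "C=O": 799, "O-H": 467}

def reaction_enthalpy(bonds_broken, bonds_formed):
    """bonds_broken / bonds_formed: dict mapping bond type -> number of bonds."""
    absorbed = sum(BOND_ENTHALPY[b] * n for b, n in bonds_broken.items())
    released = sum(BOND_ENTHALPY[b] * n for b, n in bonds_formed.items())
    return absorbed - released  # negative => exothermic reaction

# Combustion of methane: CH4 + 2 O2 -> CO2 + 2 H2O
dH = reaction_enthalpy({"C-H": 4, "O=O": 2}, {"C=O": 2, "O-H": 4})
print(dH)  # -818 (kJ/mol, estimated)
```

The estimate of about -818 kJ/mol is close to the experimentally measured enthalpy of combustion of methane (roughly -890 kJ/mol); the gap reflects the use of averaged rather than molecule-specific bond enthalpies.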
Another useful term is the heat of combustion, which is the energy released when a substance undergoes a combustion reaction with oxygen; it is often applied in the study of fuels. Food is similar to hydrocarbon and carbohydrate fuels, and when it is oxidized to carbon dioxide and water, the energy released is analogous to the heat of combustion (though not assessed in the same way as a hydrocarbon fuel -- see food energy).
Chemical potential energy is a form of potential energy related to the structural arrangement of atoms or molecules. This arrangement may be the result of chemical bonds within a molecule or otherwise. The chemical energy of a chemical substance can be transformed to other forms of energy by a chemical reaction. As an example, when a fuel is burned the chemical energy of the fuel and oxygen is converted to heat, and the same is the case with the digestion of food metabolized in a biological organism. Green plants transform solar energy to chemical energy through the process known as photosynthesis, and electrical energy can be converted to chemical energy and vice versa through electrochemical reactions.
The similar term chemical potential is used to indicate the potential of a substance to undergo a change of configuration, be it in the form of a chemical reaction, spatial transport, particle exchange with a reservoir, etc. It is not a form of potential energy itself, but is more closely related to free energy. The confusion in terminology arises from the fact that in other areas of physics not dominated by entropy, all potential energy is available to do useful work and drives the system to spontaneously undergo changes of configuration, and thus there is no distinction between "free '' and "non-free '' potential energy (hence the one word "potential ''). However, in systems of large entropy such as chemical systems, the total amount of energy present (and conserved by the first law of thermodynamics) of which this Chemical Potential Energy is a part, is separated from the amount of that energy -- Thermodynamic Free Energy (which Chemical potential is derived from) -- which (appears to) drive the system forward spontaneously as its entropy increases (in accordance with the second law).
|
where does the late show with stephen colbert tape | Ed Sullivan theater - wikipedia
The Ed Sullivan Theater is a theater located at 1697 -- 1699 Broadway, between West 53rd and West 54th, in the Theater District in Manhattan, New York City. The theater has been used as a venue for live and taped CBS broadcasts since 1936.
It is historically known as the home of The Ed Sullivan Show and as the site of the Beatles ' US debut performance, as well as earlier appearances by Elvis Presley. It also housed David Letterman 's tenure of CBS ' Late Show from 1993 to 2015. The theatre currently houses The Late Show with Stephen Colbert, the second incarnation of the Late Show franchise. It is on the National Register of Historic Places, and the interior has been designated a landmark by the New York City Landmarks Preservation Commission.
The 13 - story, brown brick and terra cotta office building with a ground - floor theater was designed by architect Herbert J. Krapp. It was built by Arthur Hammerstein between 1925 and 1927, and was named Hammerstein 's Theatre after his father, Oscar Hammerstein I. The original neo-Gothic interior contained pointed - arch stained - glass windows with scenes from the elder Hammerstein 's operas. Its first production was the three - hour musical Golden Dawn, the second male lead of which was Cary Grant, then still using his birth name, Archie Leach. Arthur Hammerstein went bankrupt in 1931, and lost ownership of the building.
It later went by the name Manhattan Theatre, Billy Rose 's Music Hall, and the Manhattan once again. In the 1930s, it became a nightclub. After CBS obtained a long - term lease on the property, the radio network began broadcasting from there in 1936, moving in broadcast facilities it had leased at NBC Studios in Radio City. Architect William Lescaze renovated the interior, keeping nearly all of the Krapp design but covering many walls with smooth white panels, his work earning praise from the magazine Architectural Forum. The debut broadcast was the Major Bowes Amateur Hour. The theater had various names during the network 's tenancy, including Radio Theater # 3 and the CBS Radio Playhouse. It was converted for television in 1950, when it became CBS - TV Studio 50. In the early and mid-Fifties, the theater played host to many of the live telecasts of The Jackie Gleason Show.
Newspaper columnist and impresario Ed Sullivan, who had started hosting his variety show Toast of the Town, soon renamed The Ed Sullivan Show, from the Maxine Elliott Theatre (CBS Studio 51) on West 39th Street in 1948, moved to Studio 50 a few years later. The theater was officially renamed for Sullivan at the end of his "20th Anniversary Celebration '' telecast on December 10, 1967.
In the 1960s, Studio 50 was one of CBS ' busiest stages, not only for Sullivan 's program but also for The Merv Griffin Show, as well as several game shows. In 1965, Studio 50 was converted to color, and the first color episode of The Ed Sullivan Show originated from the theater on October 31, 1965. (The program originated from CBS Television City in color for the previous six weeks while the color equipment was installed. One earlier color episode of the program originated from Studio 72 at Broadway and 81st on August 22, 1954.) What 's My Line?, To Tell the Truth and Password also called the studio home after CBS began broadcasting regularly in color; previously, they had been taped around the corner at CBS - TV Studio 52, which later became the disco Studio 54. The first episode of regular color telecasts of What 's My Line? was broadcast live on September 11, 1966. Line and Truth remained at Studio 50 even after they moved from CBS to first - run syndication in the late 1960s and early 1970s.
The Ed Sullivan Theater was also the first home for The $10,000 Pyramid, with its huge end - game board set at the rear of the stage, in 1973. Other short - lived game shows produced at the Ed included Musical Chairs with singer Adam Wade (1975), Shoot For The Stars with Geoff Edwards (1977) (which was an NBC show), and Pass the Buck with Bill Cullen (1978).
The CBS lease on the building expired in 1981 and it became a Reeves Entertainment teletape facility. As such it hosted the sitcom Kate & Allie, which ran from 1984 to 1989 (as it happened, on CBS), as well as the early Nickelodeon talk show Livewire. In 1990, David Niles / 1125 Productions signed onto the lease, with the theater to house his HDTV studio and new Broadway show Dreamtime. On October 17, 1992, an NBC special celebrating Phil Donahue 's 25 years on television taped in the theater. The following month, NBC News used the theater for its November 1992 election night coverage.
When David Letterman switched networks from NBC to CBS, CBS bought the theater in February 1993 from Winthrop Financial Associates of Boston for $4.5 million, as the broadcast location for his new show, Late Show with David Letterman. The existing tenant, Niles ' Dreamtime, was given four weeks to vacate. Due to the economics of moving the show and the lack of a comparable available Broadway theater, Dreamtime closed. The quick sale and vacancy of the building earned the realtor the Henry Hart Rice Achievement Award for the Most Ingenious Deal of the Year for 1993.
The theater was reconfigured into a studio, with lighting and sound adjustments; the number of seats was reduced from 1,200 to 400. During the renovation the stained glass windows were removed and stored by CBS in an arrangement with the New York City Landmarks Preservation Commission; the window openings were covered with acoustic material. The architectural firm that did the work, Polshek Partnership, notes on its web site that "to preserve the architectural integrity of the landmark, all interventions are reversible. ''
In 2005, it took nearly four months to retrofit the theater with the cabling and equipment necessary to broadcast high definition television.
Letterman 's production company Worldwide Pants had its offices in the theater 's office building from 1993 until shortly after the conclusion of Letterman hosting Late Show in 2015.
Letterman 's successor, Stephen Colbert, continues to broadcast The Late Show with Stephen Colbert from the Ed Sullivan Theater, although extensive renovations were made between the two hosts ' tenures. Removal of the Letterman set took place only a few hours after his last show, on May 20, 2015. Letterman 's marquee was also removed, and was temporarily replaced by a banner promoting the Angelo 's Pizza restaurant adjacent to the theater, featuring Colbert posing with a slice of pizza.
The theater underwent a full restoration to its original 1927 splendor, including the exposure of the theater 's dome, which had been covered up by air ducts and sound buffers, the re-installation of the original stained - glass windows, which had been removed and placed in storage during the Letterman era, and the restoration of a wooden chandelier with individual stained - glass chambers that house its bulbs. The restoration was made possible due to advances in technology that allowed less sound and video equipment to cover up the auditorium 's architectural details. CBS executive Richard Hart explained that Colbert was initially hesitant to use the theater, but called for the restoration after he was informed about the dome while touring the facility.
Colbert described his new set as being "intimate ''; it features a multi-tier design, with extensive use of LED lighting and video projection backdrops, and a larger desk area than that of Letterman. Exposed for the new show, the Sullivan 's dome is lit up with a digital projection system which is used to display images above the theater, such as a kaleidoscopic pattern featuring images of Colbert 's face and the CBS logo. New, larger audience seats were installed, reducing the overall capacity to 370 from 461. The theater 's new marquee was designed to have a "glitzy '' appearance appropriate for Broadway; CBS late - night executive Vincent Favale joked that Colbert 's marquee made one installed at 30 Rockefeller Center for The Tonight Show Starring Jimmy Fallon look like a mall kiosk in comparison.
The theatre served as a stage for The Rosie O'Donnell Show for a week of shows in October 1996 when several eighth - floor studios at NBC 's 30 Rockefeller Center headquarters experienced complications from an electrical fire.
The theatre has hosted most of the New York - based finales for the reality game show Survivor. The Ed Sullivan Theater was first used for Survivor: The Amazon (after bad weather cancelled a planned outdoor arena finale in Central Park) and was subsequently used for every even - numbered season from Survivor: Palau to Survivor: One World.
In the 21st century, the theater has hosted roof - top or marquee - top concerts by a few musicians:
On February 9, 2014, the 50th anniversary of the Beatles ' first Ed Sullivan performance, CBS News hosted a roundtable discussion at the theater. Anthony Mason moderated the panel, which consisted of Pattie Boyd, Neil Innes, Mick Jones, Tad Kubler, John Oates, Andrew Oldham, Nile Rodgers and Julie Taymor. A replica of the marquee to the theater as it looked the night of the original performance also covered up the Late Show with David Letterman marquee over the weekend. David Letterman interviewed Paul McCartney and Ringo Starr in the theater as part of a related Grammy tribute special which aired on CBS around the same time.
|
who plays donna on bold and the beautiful | Jennifer Gareis - Wikipedia
Jennifer Gareis (born August 1, 1970) is an American actress and former beauty queen. She is best known for her roles as Grace Turner on The Young and the Restless (1997 -- 2004, 2014) and as Donna Logan on The Bold and the Beautiful (2006 -- 2015, 2016, 2017, 2018).
Gareis was born in Lancaster, Pennsylvania, and graduated from J.P. McCaskey High School in 1988. She graduated from Franklin and Marshall College in 1993 with a Bachelor of Science degree in accounting, then earned a Master of Business Administration degree from Pepperdine University. She is of part Italian descent. Her great - grandmother Sebastiana Tringali came from Militello in Val di Catania, and the city gave her the honorary citizenship.
Gareis competed in her first beauty pageant in 1992 when she placed second runner - up at Miss Pennsylvania USA. She later competed in New York, winning the Miss New York USA 1994 title, and representing New York in the Miss USA 1994 pageant held in South Padre Island, Texas on February 11, 1994. Gareis placed in the top six of the nationally televised pageant, which was won by Lu Parker of South Carolina.
Gareis later began an acting career and landed the role of Grace Turner on the CBS soap opera The Young and the Restless, which she played from 1997 to 2000 and again in 2001, 2002, 2004 and 2014. She is best known for her role as Donna Logan on The Bold and the Beautiful, which she has played since July 2006. She was taken off contract in late 2014, and her last appearance as a regular cast member was on February 18, 2015. She later returned for a few episodes in October 2016, in December 2017 and in February 2018. She was ranked # 90 on the Maxim Hot 100 Women of 2002.
Gareis married Bobby Ghassemieh on March 7, 2010. On June 11, 2010, Gareis gave birth to a son, Gavin Blaze Gareis Ghassemieh. Daughter Sophia Rose Gareis Ghassemieh was born June 29, 2012.
|
when does the new episode of riverdale come out | Riverdale (2017 TV series) - Wikipedia
Riverdale is an American teen drama television series based on the characters by Archie Comics. The series premiered on January 26, 2017, on The CW. It was adapted for television by Archie Comics ' chief creative officer Roberto Aguirre - Sacasa and executive produced by Greg Berlanti. On March 7, 2017, The CW renewed the series for a second season, which premiered on October 11, 2017. In September 2017, a spin - off series, titled The Chilling Adventures of Sabrina, was revealed to be in development.
The show features an ensemble cast based on the characters of Archie Comics, with KJ Apa in the role of Archie Andrews; Lili Reinhart as Betty Cooper, his next door neighbor who has a crush on him; Camila Mendes as Veronica Lodge, his new love interest; and Cole Sprouse as Jughead Jones, his ex-best friend and the narrator of the show. The show also features Ashleigh Murray as Josie McCoy, the lead singer of the Pussycats, and Madelaine Petsch as Cheryl Blossom, the twin sister of Jason Blossom, who is at the center of the series ' mystery. Other characters in the show include Fred Andrews, Alice Cooper, FP Jones, and Hermione Lodge, the parents of Archie, Betty, Jughead, and Veronica respectively.
The series follows Archie Andrews ' life in the small town of Riverdale and explores the darkness hidden behind its seemingly perfect image.
Warner Bros. began development on an Archie feature film in 2013, after a pitch from writer Roberto Aguirre - Sacasa and director Jason Moore that would place Archie 's gang into a teen comedy feature film in the John Hughes tradition. Dan Lin and Roy Lee became producers on the project, which eventually stalled as priorities shifted at Warner Bros. towards larger tentpole films and was reimagined as a television series. Riverdale was originally in development at Fox, with the network landing the project in 2014 with a script deal plus penalty. However, Fox did not go forward with the project. In 2015, the show 's development was moved to The CW, which officially ordered a pilot on January 29, 2016. On March 7, 2017, The CW announced that the series had been renewed for a second season.
Casting Archie was a difficult process, with Aguirre - Sacasa stating "I think we literally saw every redheaded young guy in L.A. It certainly felt that way. '' The production team found KJ Apa just three days before they had to present screen tests to the network, which created tension in the last few days leading up to the studio presentation.
In April 2017, it was announced Mark Consuelos had signed on for the second season to play Veronica Lodge 's father, Hiram Lodge. The role was in second position to his existing role on Pitch but the cancellation of that series was announced on May 1, 2017. The next month, it was announced Charles Melton was cast to take over the role of Reggie from Ross Butler in season 2 due to his status as a series regular on 13 Reasons Why. It was also announced that Casey Cott was promoted to a series regular. In July 2017, it was announced that True Blood star Brit Morgan had been cast in the recurring role of Penny Peabody, an attorney the Southside Serpents call in case of any run - ins with the law. In August 2017, it was announced Graham Phillips had been cast to play Nick St. Clair, Veronica 's ex-boyfriend from New York.
Filming of the pilot began on March 14 and ended on April 1, in Vancouver, British Columbia. Production on the remaining 12 episodes of season one began on September 7 in Vancouver. Sets include Pop Tate 's Chock'lit Shoppe, a copy of the functioning diner used in the pilot that is so realistic a truck driver parked his 18 - wheeler there, believing that it was open. Production of season two will return to Vancouver and the Fraser Valley. Principal photography began on June 22, 2017.
Netflix acquired the exclusive international broadcast rights to Riverdale, making the series available as an original series to its platform less than a day after its original U.S. broadcast.
In July 2016, members of the cast and the executive producers attended San Diego Comic - Con to promote the upcoming series, where they premiered the first episode "Chapter One: The River 's Edge ''. The first trailer for the series was released in early December 2016, while additional teasers followed later that month and into 2017. The CW also sponsored multiple Tastemade videos, where they cooked several foods that are popular in the Archie universe.
Along with heavily promoting the TV series in their regular comics since January 2017, Archie Comics are planning to release a comic book adaptation of Riverdale, featuring auxiliary story arcs set within the television series ' own continuity. The comic book adaptation is being headed by Roberto Aguirre - Sacasa himself, along with various other writers from the show. Alongside a one - shot pilot issue, illustrated by Alitha Martinez, released in March 2017, the first issue of the on - going Riverdale the comic book series was set to release starting April 2017.
In addition to the adaptation, Archie Comics are releasing a series of compilation graphic novels branded under the title Road to Riverdale. This series features early issues from the New Riverdale reboot line, introducing the audience of the TV series to the regular on - going comic series that inspired it. Archie Comics plans to re-print the volumes of Road to Riverdale in subsequent months as digest magazines. The first volume was released in March 2017.
The first season of Riverdale has received generally positive reviews from critics. On Rotten Tomatoes, it has a fresh rating of 87 % based on 53 reviews, with a weighted average of 7.27 / 10. The site 's critical consensus reads, "Riverdale offers an amusingly self - aware reimagining of its classic source material that proves eerie, odd, daring, and above all addictive. '' On Metacritic, the season has a score of 67 out of 100 based on 35 critics, indicating "generally favorable reviews ''. Dave Nemetz of TVLine gave the series a "B + '' saying that it turned, "out to be an artfully crafted, instantly engaging teen soap with loads of potential. ''
Some writers have criticized the series for its handling of minority characters. While reviewing the first season, Kadeen Griffiths of Bustle declared "the show marginalizes and ignores the (people of color) in the cast to the point where they may as well not be there. '' In an article for Vulture, Angelica Jade Bastien discussed the show 's treatment of Josie and the Pussycats (who are each played by African Americans), noting, "They 're not characters so much as they are a vehicle for a Message. Josie and her fellow pussycats are positioned to communicate the message that Riverdale is more modern and inclusive than teen dramas of the past, even though it has yet to prove it beyond its casting. '' Monique Jones of Ebony noted, "Despite the show 's multi-racial casting choices, it seems like Riverdale is still a mostly white town. '' She also expressed fondness for the relationship between Archie Andrews and Valerie Brown, but declared "Archie should n't be what makes Valerie interesting to us ''.
In September 2017, it was reported that a live - action television series, The Chilling Adventures of Sabrina, was being developed for The CW by Warner Bros. Television and Berlanti Productions, with a planned release in the 2018 -- 2019 television season. Based on the comic series of the same name, featuring the Archie Comics character Sabrina the Teenage Witch, the series would be a companion series to Riverdale. Lee Toland Krieger will direct the pilot, which will be written by Aguirre - Sacasa. Both are executive producers along with Berlanti, Schechter, and Goldwater.
|
write a short paragraph that explains ethnomusicology and identifies all phases of the discipline | Ethnomusicology - wikipedia
Ethnomusicology is the study of music from the cultural and social aspects of the people who make it. It encompasses distinct theoretical and methodical approaches that emphasize cultural, social, material, cognitive, biological, and other dimensions or contexts of musical behavior, instead of only its isolated sound component.
The term ethnomusicology is said to have been first coined by Jaap Kunst from the Greek words ἔθνος (ethnos, "nation '') and μουσική (mousike, "music ''). It is often defined as the anthropology or ethnography of music, or as musical anthropology. During its early development from comparative musicology in the 1950s, ethnomusicology was primarily oriented toward non-Western music, but for several decades it has included the study of all and any musics of the world (including Western art music and popular music) from anthropological, sociological and intercultural perspectives. Bruno Nettl once characterized ethnomusicology as a product of Western thinking, proclaiming that "ethnomusicology as western culture knows it is actually a western phenomenon ''; in 1992, Jeff Todd Titon described it as the study of "people making music ''.
Stated broadly, ethnomusicology may be described as a holistic investigation of music in its cultural contexts. Combining aspects of folklore, psychology, cultural anthropology, linguistics, comparative musicology, music theory, and history, ethnomusicology has adopted perspectives from a multitude of disciplines. This disciplinary variety has given rise to many definitions of the field, and attitudes and foci of ethnomusicologists have evolved since initial studies in the area of comparative musicology in the early 1900s. When the field first came into existence, it was largely limited to the study of non-Western music -- in contrast to the study of Western art music, which had been the focus of conventional musicology. In fact, the field was referred to early in its existence as "comparative musicology, '' defining Western musical traditions as the standard to which all other musics were compared, though this term fell out of use in the 1950s as critics for the practices associated with it became more vocal about ethnomusicology 's distinction from musicology. Over time, the definition broadened to include study of all the musics of the world according to certain approaches.
While there is not a single, authoritative definition for ethnomusicology, a number of constants appear in the definitions employed by leading scholars in the field. It is agreed upon that ethnomusicologists look at music from beyond a purely sonic and historical perspective, and look instead at music within culture, music as culture, and music as a reflection of culture. In addition, many ethnomusicological studies share common methodological approaches encapsulated in ethnographic fieldwork, often conducting primary fieldwork among those who make the music, learning languages and the music itself, and taking on the role of a participant observer in learning to perform in a musical tradition, a practice Hood termed "bi-musicality ''. Musical fieldworkers often also collect recordings and contextual information about the music of interest. Thus, ethnomusicological studies do not rely on printed or manuscript sources as the primary source of epistemic authority.
While the traditional subject of musicology has been the history and literature of Western art music, ethnomusicology was developed as the study of all music as a human social and cultural phenomenon. Oskar Kolberg is regarded as one of the earliest European ethnomusicologists as he first began collecting Polish folk songs in 1839. Comparative musicology, the primary precursor to ethnomusicology, emerged in the late 19th century and early 20th century. The International Musical Society in Berlin in 1899 acted as one of the first centers for ethnomusicology. Comparative musicology and early ethnomusicology tended to focus on non-Western music, but in more recent years, the field has expanded to embrace the study of Western music from an ethnographic standpoint.
The International Council for Traditional Music (founded 1947) and the Society for Ethnomusicology (founded 1955) are the primary international academic organizations for advancing the discipline of ethnomusicology.
Ethnomusicologists have offered varying definitions of the field. More specifically, scholars debate what constitutes ethnomusicology. Bruno Nettl distinguishes between discipline and field, believing ethnomusicology is the latter. There are multiple approaches to and challenges of the field. Some approaches reference "musical areas '' like "musical synthesis in Ghana '' while others emphasize "a study of culture through the avenue of music, to study music as social behavior. '' The multifaceted and dynamic approaches to ethnomusicology allude to how the field has evolved. The primary element that distinguishes ethnomusicology from musicology is the expectation that ethnomusicologists engage in sustained, diachronic fieldwork as their primary source of data.
There are many individuals and groups who can be connected to ethnomusicology. According to Merriam, some of these groups are "players of ethnic music, '' "music educators, '' "those who see ethnic music in the context of a global view of music, vis a vis, particularly, the study of Western "classical '' music, '' "made up of persons with a variety of interests, all of which are in some sense "applied '' like "professional ethnomusicologists, '' music therapists, the "musicologists '' and the "anthropologist. ''
Folklore and folklorists were the precursors to the field of ethnomusicology prior to World War II. They laid a foundation of interest in the preservation and continuation of the traditional folk musics of nations and an interest in the differences between the musics of various nations. Folklorists approached folklore through comparative methods; these methods sought to prove that folk music was simple but reflected the lives of the lower classes.
Folklore is defined as "traditional customs, tales, sayings, dances, or art forms preserved among a people. '' Bruno Nettl, an ethnomusicologist, defines folk music as "... the music in oral tradition found in those areas dominated by high cultures. '' This definition can be simplified as the traditional music of a certain people within a country or region.
Nationalism and the search for national identities was tied into folkloric studies. Southern and Eastern European composers incorporated folk music into their compositions to instill sentiments of nationalism in their audiences. Examples of such composers are Leoš Janáček, Edvard Grieg, Jean Sibelius, Béla Bartók, and Nikolai Rimsky - Korsakov. As Helen Meyers puts it, "Nationalist composers throughout Europe turned to peasant song to enrich the classical musical idiom of their country. '' In the United States, the preservation of folk music was a search for a sense of national tradition in the face of striking regional diversity.
"The collecting projects of southern and eastern Europeans of the second half of the 19th century were largely contributions to folkloric studies. These collectors feared that entire repertories were on the point of extinction, repertories that were thought a proper base for nationalist styles of art music. Early collectors were motivated by musical nationalism, theories of self - determination, and by hope for a musical rationale for a pan-Slavic identity... eastern Europeans explored their own linguistic setting, amassing large collections, thousands of song texts and, later, tunes, which they sought to classify and compare. '' The most well - known eastern European collectors were Béla Bartók (Hungary), Constantin Brăiloiu (Romania), Klement Kvitka (Ukraine), Adolf Chybinski (Poland), and Vasil Stoin (Bulgaria).
In 1931, Béla Bartók published an essay detailing his study of what he refers to as "peasant music, '' which "... connotes... all the tunes which endure among the peasant class of any nation, in a more or less wide area and for a more or less long period, and constitute a spontaneous expression of the musical feeling of that class. '' Bartók took a comparative approach in his investigation of Hungarian folk music and believed that peasant music was primitive when compared to the music of the educated class.
In North America, state folklore societies were founded in the early 20th century and were dedicated to the collection and preservation of Old World folksong, i.e. music that came from Europe, Africa, or places outside of the U.S. during the settlement of the U.S. by colonizers; Native American music was also included in these societies. "In 1914 the US Department of Education instigated a rescue mission for ballads and folksongs, stimulating an era of collecting by local enthusiasts and academics that lasted through the Depression until World War II. '' Cecil Sharp, a lawyer turned musician, greatly contributed to the collection and preservation of British folk songs found in Appalachia. His interest in folk music began in 1903, when he discovered that a large amount of native folk song survived in England; he published Folk Songs from Somerset (1904 - 1909). After he studied traditional English folk song in England, he traveled to the Appalachian region of the United States with his collaborator Maud Karpeles three times between 1916 and 1918 and discovered around 1,600 English tunes and variants. In 1909 Olive Dame Campbell traveled to the Appalachian region from Massachusetts and discovered that the ballads sung by the residents had strong ties to English and Scots - Irish folk songs. She collected ballads by having people sing them to her while she recorded them on a phonograph and transcribed them. She worked with Cecil Sharp and published the ballads that she had collected in English Folk Songs from the Southern Appalachians. The Appalachian region preserved old English and Scots - Irish folk songs because it was isolated from the city centers of the original thirteen colonies; the region is mountainous, meaning that not many people traveled in or out of it.
A controversy in the field of musicology arose surrounding Negro Spirituals. A spiritual is defined as "a religious song usually of a deeply emotional character that was developed especially among blacks in the southern U.S. '' The controversy revolved around whether the spirituals originated solely from Africa or whether they were influenced by European music. Richard Wallaschek claimed that Negro Spirituals were merely imitations of European song, starting the debate on the subject. Erich von Hornbostel concluded that African and European musics were constructed on different principles and therefore could not be combined. The white origin theory argued that black music had been influenced by Anglo - American song and constituted an integral part of the British tradition. Melville J. Herskovits and his student Richard A. Waterman discovered that "European and African forms had blended to produce new genres bearing features of both parent musics. European and African music... have many features in common, among them diatonic scales and polyphony. When these two musics met, during the slave era, it was natural for them to blend... '' Negro Spirituals were the first black musical genre comprehensively studied by scholars.
The interest in folklore did not end with the folklorists before World War II. After World War II, the International Folk Music Council was founded and was later renamed the International Council for Traditional Music. In 1978, Alan Lomax sought to classify and compare the music of world cultures through a system he named Cantometrics. This goal began with his idea that singing is a universal characteristic and therefore all musics of the world should have some comparable characteristics. Lomax believed that human migration could be tracked through songs; when a certain culture 's song or style is heard in another geographical region, it signifies that the two cultures interacted at some point. Lomax held that song styles vary with productive range, political level, level of stratification of class, severity of sexual mores, balance of dominance between male and female, and the level of social cohesiveness, and that all musics could be compared through the use of these categories. He compared vocal performances through a set of characteristics, some of which are ' raspiness ', the use of meaningful words, and the use of meaningful syllables.
Comparative musicology is known as the cross-cultural study of music. Once referred to as "Musikologie '', comparative musicology emerged in the late 19th century in response to the works of Komitas Keworkian (also known as Komitas Vardapet or Soghomon Soghomonian). A precedent to modern ethnomusicological studies, comparative musicology seeks to look at music throughout world cultures and their respective histories. Similarly to comparative linguistics, comparative musicology seeks to classify the music of global cultures, illustrate its geographic distribution, explain universal musical trends, and understand the causation concerning the creation and evolution of music. Developed throughout the early 20th century, the term "comparative musicology '' emerged in an 1885 publication by Guido Adler, who added the term "comparative '' to musicology to describe works by scholars such as Alexander J. Ellis, whose academic process was founded in cross-cultural comparative studies. As one of four subdivisions of systematic musicology, "comparative musicology '' was once described by Adler himself as the task of "comparing tonal products, in particular the folk songs of various peoples, countries, and territories, with an ethnographic purpose in mind, grouping and ordering these according to... their characteristics ''.
Comparative musicology is typically thought to have been inspired by the work of Komitas Vardapet, an Armenian priest, musicologist, and choirmaster, who began his work studying in Berlin. His work primarily focused on the transcription of nearly 4000 pieces of Armenian, Turkish, and Kurdish folk music. His efforts to categorize and classify various music inspired others to do the same. This included Guido Adler, a Bohemian - Austrian musicologist and professor at the German University of Prague, Bohemia, who officially coined the term "vergleichende Musikwissenschaft '' (translated: comparative music science) in 1885 in response to the emergence of new academic methods of studying music. Around the same time as Adler 's development of the terminology associated with the study, the work of Alexander J. Ellis, who focused primarily on developing the cents system, was emerging as the foundation of the comparative elements of musicology. This cents system allowed for precise measurement of pitch, expressed in "hundredths of an equal - tempered semitone ''. Ellis also established a general definition for the pitch of a musical note, which he noted as "the number of... complete vibrations... made in each second by a particle of air while the note is heard ''.
Other contemporaries of Komitas, Ellis, and Adler included Erich von Hornbostel and Carl Stumpf, who are typically credited with establishing comparative musicology as an official field separate from musicology itself. Von Hornbostel, who once stated that Ellis was the "true founder of comparative scientific musicology '', was an Austrian scholar of music, while Stumpf was a German philosopher and psychologist. Together with Otto Abraham, they founded the "Berlin School of Comparative Musicology ''. Despite working together, Stumpf and Hornbostel had very different ideas regarding the foundation of the school. As Stumpf worked primarily from a psychological perspective, his position was founded in the belief of the "unity of the human mind ''; his interests were in sensual experiences of tones and intervals and their respective ordering. In addition, his studies focused on testing his hypothesis of perceived fusion of tones. Hornbostel, on the other hand, took up Stumpf 's assignment but approached the topic from a systematic and theoretical perspective, largely setting aside the psychological questions that occupied Stumpf. Through the institution, additional scholars such as Curt Sachs, Mieczyslaw Kolinski, George Herzog and Jaap Kunst (who first coined the term "ethno - musicology '' in a 1950 article) further expanded the field of comparative musicology. Additionally, Hungarian composer Béla Bartók was conducting his own comparative studies at the time, focusing primarily on Hungarian (and other) folk music, in addition to the influence of European popular music on the musical folklore of that particular geographic region.
Eventually, comparative musicology began experiencing changes. Following the Second World War, issues regarding the ethical contexts of comparative musicology began to emerge. As comparative musicology was founded primarily in Europe, most scholars based their comparisons in Western music. In an effort to adjust the Western bias present in their studies, academics such as Jaap Kunst began adjusting their approaches in analysis and fieldwork to become more globally focused. In the 1950s, comparative musicology continued to evolve to become ethnomusicology, but still remains today a field focused primarily on comparative studies in music.
Ethnomusicology has evolved both in terminology and ideology since its formal inception in the late 19th century. Although practices paralleling ethnomusicological work have been noted throughout colonial history, an Armenian priest known as Komitas Vardapet is considered one of the pioneers of ethnomusicology 's rise to prominence in 1896. While studying in Berlin at Frederick William University and attending the International Music Society, Vardapet transcribed over 3000 pieces of music. In his notes, he emphasized cultural and religious elements as well as social aspects of music and poetry. Inspired by these thoughts, many Western European nations began to transcribe and categorize music based on ethnicity and culture. The field was known very briefly in the 1880s as "Musikologie '' or "Musikgesellschaft, '' then as "comparative musicology '' until around 1950, at which point the term "ethno - musicology '' was introduced to provide an alternative to the traditional practices of comparative musicology. In 1956 the hyphen was removed with ideological intent to signify the discipline 's validity and independence from the fields of musicology and anthropology. These changes to the field 's name paralleled its internal shifts in ideological and intellectual emphasis.
Comparative musicology, an initial term intended to differentiate what would become ethnomusicology from musicology, was the area of study concerned with utilizing methods of acoustics to measure pitches and intervals, quantitatively comparing different kinds of music. Because of the high density of Europeans and Euro - Americans engaged with the area 's research, comparative musicology primarily surveyed the music of non-Western oral folk traditions and then compared it against Western conceptions of music. After 1950, scholars sought to define the field more broadly and to eradicate the notions of ethnocentrism inherent to the study of comparative musicology; for example, Polish scholar Mieczyslaw Kolinski proposed that scholars in the field focus on describing and understanding musics within their own contexts. Kolinski also urged the field to move beyond ethnocentrism even as the term ethnomusicology grew in popularity as a replacement for what was once described as comparative musicology. He noted in 1959 that the term ethnomusicology limited the field, both by imposing "foreignness '' from a Western standpoint and therefore excluding the study of Western music with the same attention to cultural context that is given to otherized traditions, and by containing the field within anthropological problems rather than extending musical study to limitless disciplines within the humanities and the social sciences. Throughout critical developmental years in the 50s and 60s, ethnomusicologists shaped and legitimized the fledgling field through discussions of the responsibilities of ethnomusicologists and the ethical implications of ethnomusicological study, articulations of ideology, suggestions for practical methods of research and analysis, and definitions of music itself.
It was also at this time that the emphasis of ethnomusicological work shifted from analysis to fieldwork, and the field began to develop research methods to center fieldwork over the traditional "armchair '' work.
In 1960, Mantle Hood, a leading pioneer of American ethnomusicology, established the Institute of Ethnomusicology at the University of California at Los Angeles, largely legitimizing the field and solidifying its position as an academic discipline.
In the 1970s, ethnomusicology was becoming more well known outside of the small circle of scholars who had founded and fostered the early development of the field. The influence of ethnomusicology spread to composers, music therapists, music educators, anthropologists, musicologists, and even popular culture. Ethnomusicology and its academic rigor lent newfound legitimacy, as well as useful theoretical and methodological frameworks, to projects that attempted to record, document, study, and / or compare musics from around the world. Alan Merriam classified these ethnomusicological participants in four groups:
One defining feature of this decade was the advent of anthropological influence within ethnomusicology. During this time, the discipline of ethnomusicology experienced a shift of focus away from musical data, such as pitch and formal structure, toward humans and human relationships. The incorporation of theoretical frameworks from the field of anthropology also led to an increasingly welcoming attitude towards accepting yet more fields of study, such as linguistics and psychology, into the broader pursuit of understanding music as it functions in (or "as '') culture.
Throughout this decade, the tensions regarding comparative approaches continued to come into question in ethnomusicological circles. The introduction of Alan Lomax 's system of cantometrics in the late 60s accounted for physical traits of vocal production like language / utterance, the distinctness of "singing voice '' from speaking voice, use of intonation, ornamentation, and pitch, consistency of tempo and volume, and the length of melodic phrases, and also the social elements like the participation of the audience and the way a performance is structured; in this way, it intended to make the data of ethnomusicological research more quantifiable and grant it scientific legitimacy. However, the system also legitimized comparative methods, thus extending the debate regarding the ethics of a comparative approach.
The 1980s ushered in a heightened awareness of bias and representation in ethnomusicology, meaning that ethnomusicologists took into consideration the effects of biases they brought to their studies as (usually) outgroup members, as well as the implications of how they choose to represent the ethnography and music of the cultures they study. Historically, Western field workers dubbed themselves experts on foreign music traditions once they felt they had a handle on the music, but these scholars ignored differences in worldview, priority systems, and cognitive patterns, and thought that their interpretation was truth. This type of research contributed to a larger phenomenon called Orientalism.
It was also during that time that Clifford Geertz 's concept of thick description spread from anthropology to ethnomusicology. In particular, ethnomusicologist Timothy Rice called for a more human - focused study of ethnomusicology, putting emphasis on the processes that bind music and society together in musical creation and performance. His model follows Alan Merriam 's identification of the field as "the study of music in culture. '' Rice puts more focus on historical change as well as the role of the individual in music - making. In particular, Rice 's model asks "how do people historically construct, socially maintain and individually create and experience music? '' In addition to presenting new models of thought, Rice 's ideas were also meant to unify the field of ethnomusicology into a more organized, cohesive field by providing an organized series of questions to address in the course of research.
Another concern that came to the forefront in the 1980s is known as reflexivity. The ethnomusicologist and his or her culture of study have a bidirectional, reflexive influence on one another in that it is possible not only for observations to affect the observer, but also for the presence of the observer to affect what they observe.
The awareness of the nature of oral tradition and the problems it poses for reliability of source came into discussion during the 1980s. The meaning of a particular song is in the kind of flux associated with any oral tradition, each successive performer bringing his or her own interpretation. Furthermore, regardless of original intended meaning, once a song is originally interpreted by the audience, recalled later in memory when recounting the performance to a researcher, interpreted by the researcher, and then interpreted by the researcher 's audience, it can, and does, take on a variety of different meanings. The 1980s can be classified by the emergence of awareness of cultural bias, the reliability of different sources, and a general skepticism as regards the validity of the researcher 's point of view and of the object of research itself.
By the late 1980s, the field of ethnomusicology had begun examining popular music and the effect of media on musics around the world. Several definitions of popular music exist but most agree that it is characterized by having widespread appeal. Peter Manuel adds to this definition by distinguishing popular music by its association with different groups of people, performances by musicians not necessarily trained or intellectual, and dispersion through broadcasting and recording. Theodor Adorno defined popular music by contrasting it from serious music, which is purposeful and generally cooperates within strictly structured rules and conventions. Popular music can operate less deliberately and focuses on creating a general effect or impression, usually focusing on emotion.
Although the music industry developed over several decades, popular music drew ethnomusicologists ' attention by the 90s because a standardizing effect began to develop. The corporate nature surrounding popular music streamlined it into a framework that focused on slight deviations from the accepted norm, creating what Adorno calls "pseudo-individualism ''; what the public would perceive as unique or organic would musically comply with standard, established musical conventions. Thus, a duality emerged from this standardization, an industry - driven manipulation of the public 's tastes to give people what they want while simultaneously guiding them to it. In the case of rock music, while the genre may have grown out of politicized forces and other forms of meaningful motivation, the corporate influence over popular music became so integral to its identity that directing public taste became increasingly easier. Technological developments allowed for easy dispersion of Western music, causing its dominance in rural and urbanized areas across the globe. However, because popular music assumes such a corporatized role and therefore remains subject to a large degree of standardization, ambiguity exists as to whether the music reflects actual cultural values or only those of the corporate sector seeking economic profit. Because popular music developed such a dependent relationship with media and the corporations surrounding it, where record sales and profit indirectly shaped musical decisions, the superstar became an important element of popular music. From the fame and economic success surrounding such superstars, subcultures continued to arise, such as the rock and punk movements, perpetuated by the same corporate machine that shaped the musical aspect of popular music.
Musical interaction through globalization played a huge role in ethnomusicology in the 1990s. Musical change was increasingly discussed. Ethnomusicologists began looking into a ' global village ', straying away from a specialized look at music within a specific culture. There are two sides to this globalization of music: on one hand it would bring more cultural exchange globally, but on the other hand it could facilitate the appropriation and assimilation of musics. Ethnomusicologists have approached this blending of different musical styles within a single music by looking at its complexity and the degree of compatibility between the styles. This Westernization and modernization of music created a new focus of study; ethnomusicologists began to look at how different musics interact in the 1990s.
By the 2000s, musicology (which had previously limited its focus almost exclusively to European art music), began to look more like ethnomusicology, with greater awareness of and consideration for sociocultural contexts and practices beyond analysis of art music compositions and biographical studies of major European composers.
Ethnomusicologists continued to deal with and consider the effects of globalization on their work. Bruno Nettl identifies Westernization and modernization as two concurrent and similar cultural trends that served to help streamline musical expression all over the world. While creeping globalization had an undeniable effect on cultural homogeneity, it also helped broaden musical horizons all over the world. Rather than simply lamenting the continuing assimilation of folk music of non-western cultures, many ethnomusicologists chose to examine exactly how non-western cultures dealt with the process of incorporating western music into their own practices to facilitate the survival of their previous traditions.
With the ongoing globalization of music, many genres influenced each other and elements from foreign music became more prevalent in mainstream popular music. Diaspora populations such as the Punjabi population in England were studied because the characteristics of their music showed signs of the effects of global media. Their music, like the music of many other displaced cultures, was made up of elements from the folk music of their culture along with the popular music of their location. Through this process the idea of transnationalism in music emerged.
Additionally, postcolonial thought remained a focus of ethnomusicological literature. One example comes from Ghanaian ethnomusicologist Kofi Agawu; in Representing African Music: Postcolonial Notes, Queries, Positions, he details how the concept of "African rhythm '' has been misrepresented -- "African '' music is not a homogenous body as it is often perceived by Western thought. Its differences from Western music are often considered deficiencies, and the emphasis on "African rhythm '' prevalent throughout music scholarship prevents accurate comparison of other musical elements such as melody and harmony. Influenced by postcolonial theories, Agawu focuses on deconstructing the Eurocentric intellectual hegemony surrounding the understanding of African music and the notation of the music itself. Additionally, the new notational systems that have been developed specifically for African music further prevent accurate comparison due to the impossibility of applying these notations to Western music. Overall, Agawu implores scholars to search for similarities rather than differences in their examinations of African music, as a heightened exploration of similarities would be much more empowering and intellectually satisfying. This means reexamining the role that European influences (through colonialism and imperialism) and other cultural influences have had on the history of "African '' music, whether as individual nations, tribes, or collectively as a continent. The emphasis on difference within music scholarship has led to the creation of "default grouping mechanisms '' that inaccurately convey the music of Africa, such as claims that polymeter, additive rhythm and cross rhythm are prevalent throughout all African music. The actual complexity and sophistication of African music goes unexplored when scholars simply talk about it within these categories and move on.
Agawu also calls for the direct empowerment of postcolonial African subjects within music scholarship, in response to attempts to incorporate native discourses into scholarship by Western authors that he believes have led to inaccurate representation and a distortion of native voices. Agawu worries of the possible implementation of the same Western ideals but with an "African '' face, "in what we have, rather, are the views of a group of scholars operating within a field of discourse, an intellectual space defined by Euro - American traditions of ordering knowledge ''.
Currently, scholarship that may have historically been identified as ethnomusicology is now classified as sound studies.
Ethnomusicology is not limited to the study of music from non-Western cultures. It is a discipline that encompasses various approaches to the study of the many musics around the world that emphasize their particular dimensions (cultural, social, material, cognitive, biological, etc.) and contexts beyond their isolated sound components. Western music and its influences are thus also subject to ethnomusicological interest.
The influence of the media on consumerism in Western society is a bi-directional effect, according to Thomas Turino. A large part of self - discovery and feeling accepted in social groups is related to common musical tastes. Record companies and producers of music recognize this reality and respond by catering to specific groups. In the same way that "sounds and imagery piped in over the radio and Internet and in videos shape adolescent sense of gendered selves as well as generational and more specific cohort identities '', so do individuals shape the media 's marketing responses to musical tastes in Western popular music culture. The culmination of identity groups (teenagers in particular) across the country represents a significant force that can shape the music industry based on what is being consumed.
Ethnomusicologists often apply theories and methods from cultural anthropology, cultural studies and sociology as well as other disciplines in the social sciences and humanities. Though some ethnomusicologists primarily conduct historical studies, the majority are involved in long - term participant observation. Therefore, ethnomusicological work can be characterized as featuring a substantial, intensive ethnographic component.
Two approaches to ethnomusicological studies are common: the anthropological and the musicological. Ethnomusicologists using the anthropological approach generally study music to learn about people and culture. Those who practice the musicological approach study people and cultures to learn about music. Charles Seeger differentiated between the two approaches, describing the anthropology of music as studying the way that music is a "part of culture and social life '', while musical anthropology "studies social life as a performance, '' examining the way "music is part of the very construction and interpretation of social and conceptual relationships and processes. ''
Charles Seeger and Mantle Hood were two ethnomusicologists that adopted the musicological approach. Hood started one of the first American university programs dedicated to ethnomusicology, often stressing that his students must learn how to play the music they studied. Further, prompted by a college student 's personal letter, he recommended that potential students of ethnomusicology undertake substantial musical training in the field, a competency that he described as "bimusicality. '' This, he explained, is a measure intended to combat ethnocentrism and transcend problematic Western analytical conventions. Seeger also sought to transcend comparative practices by focusing on the music and how it impacted those in contact with it. Similar to Hood, Seeger valued the performance component of ethnomusicology.
Ethnomusicologists following the anthropological approach include scholars such as Steven Feld and Alan Merriam. The anthropological ethnomusicologists stress the importance of field work and utilizing participant observation. This can include a variety of distinct fieldwork practices, including personal exposure to a performance tradition or musical technique, participation in a native ensemble, or inclusion in a myriad of social customs. Similarly, Alan Merriam defined ethnomusicology as "music as culture, '' and stated four goals of ethnomusicology: to help protect and explain non-Western music, to save "folk '' music before it disappears in the modern world, to study music as a means of communication to further world understanding, and to provide an avenue for wider exploration and reflection for those who are interested in primitive studies. This approach emphasizes the cultural impact of music and how music can be utilized to further understand humanity.
The two approaches to ethnomusicology bring unique perspectives to the field, providing knowledge both about the effects culture has on music and about the impact music has on culture.
The great diversity of musics found across the world has necessitated an interdisciplinary approach to ethnomusicological study. Analytical and research methods have changed over time, as ethnomusicology has continued solidifying its disciplinary identity, and as scholars have become increasingly aware of issues involved in cultural study (see Theoretical Issues and Debates). Among these issues are the treatment of Western music in relation to music from "other, '' non-Western cultures and the cultural implications embedded in analytical methodologies. Kofi Agawu (see 2000s) noted that scholarship on African music seems to emphasize difference further by continually developing new systems of analysis; he proposes the use of Western notation to instead highlight similarity and bring African music into mainstream Western music scholarship.
In seeking to analyze such a wide scope of musical genres, repertories, and styles, some scholars have favored an all - encompassing "objective '' approach, while others argue for "native '' or "subjective '' methodologies tailored to the musical subject. Those in favor of "objective '' analytical methods hold that certain perceptual or cognitive universals or laws exist in music, making it possible to construct an analytical framework or set of categories applicable across cultures. Proponents of "native '' analysis argue that all analytical approaches inherently incorporate value judgments and that, to understand music, it is crucial to construct an analysis within its cultural context. This debate is well exemplified by a series of articles between Mieczyslaw Kolinski and Marcia Herndon in the mid-1970s; these authors differed strongly on the style, nature, implementation, and advantages of analytical and synthetic models, including their own. Herndon, backing "native categories '' and inductive thinking, distinguishes between analysis and synthesis as two different methods for examining music. By her definition, analysis seeks to break down parts of a known whole according to a definite plan, whereas synthesis starts with small elements and combines them into one entity by tailoring the process to the musical material. Herndon also debated the subjectivity and objectivity necessary for a proper analysis of a musical system. Kolinski, among those scholars critiqued by Herndon 's push for a synthetic approach, defended the benefits of analysis, arguing in response for the acknowledgment of musical facts and laws.
As a result of the above debate and ongoing ones like it, ethnomusicology has yet to establish any standard method or methods of analysis. This is not to say that scholars have not attempted to establish universal or "objective '' analytical systems. Bruno Nettl acknowledges the lack of a singular comparative model for ethnomusicological study, but describes methods by Mieczyslaw Kolinski, Béla Bartók, and Erich von Hornbostel as notable attempts to provide such a model.
Perhaps the first of these objective systems was the development of the cent as a definitive unit of pitch by phonetician and mathematician Alexander J. Ellis (1885). Ellis used his system, which divided the octave into 1200 cents (100 cents in each Western semitone), as a means of analyzing and comparing scale systems of different musics. Ellis presented his research in "On the Musical Scales of Various Nations, '' making the influential statement that "musical scales were not acoustic givens but humanly organized preferences. '' Ellis 's study is also an early example of comparative musicological fieldwork (see Fieldwork).
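Ellis 's unit follows directly from dividing the octave (a 2:1 frequency ratio) into 1200 equal logarithmic steps, so that the interval between two frequencies is 1200 · log2(f2 / f1) cents. A minimal sketch of this computation (the function name and example frequencies are illustrative, not Ellis 's own):

```python
import math

def cents(f_low: float, f_high: float) -> float:
    """Interval size in cents between two frequencies (Ellis's unit:
    1200 cents per octave, 100 per equal-tempered semitone)."""
    return 1200 * math.log2(f_high / f_low)

# An octave (ratio 2:1) is exactly 1200 cents:
print(cents(440.0, 880.0))  # 1200.0
# A pure perfect fifth (ratio 3:2) is about 702 cents,
# slightly wider than the equal-tempered fifth of 700 cents:
print(round(cents(440.0, 660.0), 1))  # 702.0
```

Because the measure is logarithmic, intervals add rather than multiply, which is what made it practical for comparing scale systems whose steps do not match Western semitones.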
Alan Lomax 's method of cantometrics employed analysis of songs to model human behavior in different cultures. He posited that there is some correlation between musical traits or approaches and the traits of the music 's native culture. Cantometrics involved qualitative scoring based on several characteristics of a song, comparatively seeking commonalities between cultures and geographic regions.
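Lomax 's actual coding system rated songs on dozens of parameters; the sketch below is only a toy illustration of the cantometric idea of scoring songs on shared ordinal scales and comparing the resulting profiles across regions. The parameter names (apart from "wordiness, '' which is mentioned in the literature) and all scores are invented for this example.

```python
# Toy illustration of a cantometrics-style comparison. Parameter names
# and scores are hypothetical, not Lomax's actual coding scheme.

def profile_distance(a: dict, b: dict) -> float:
    """Mean absolute difference across the parameters two song profiles share."""
    shared = a.keys() & b.keys()  # set intersection of parameter names
    return sum(abs(a[p] - b[p]) for p in shared) / len(shared)

# Each song profile scores traits on a 1-5 ordinal scale.
song_region_x = {"wordiness": 4, "vocal_blend": 2, "ornamentation": 5}
song_region_y = {"wordiness": 4, "vocal_blend": 3, "ornamentation": 5}
song_region_z = {"wordiness": 1, "vocal_blend": 5, "ornamentation": 1}

# Closer profiles yield a smaller distance, suggesting stylistic affinity:
print(profile_distance(song_region_x, song_region_y))
print(profile_distance(song_region_x, song_region_z))
```

The comparative claim of cantometrics rests on aggregating many such profiles per region, not on any single pairwise comparison.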
Mieczyslaw Kolinski measured the exact distance between the initial and final tones in melodic patterns. Kolinski refuted the early scholarly opposition between European and non-European musics, choosing instead to focus on much - neglected similarities between them, which he saw as markers of "basic similarities in the psycho - physical constitution of mankind. '' Kolinski also employed his method to test, and disprove, Erich von Hornbostel 's hypothesis that European music generally had ascending melodic lines, while non-European music featured descending melodic lines.
Adopting a more anthropological analytical approach, Steven Feld conducted descriptive ethnographic studies regarding "sound as a cultural system. '' Specifically, his studies of the Kaluli people of Papua New Guinea use sociomusical methods to draw conclusions about their culture.
Bruno Nettl, Emeritus Professor of Musicology at the University of Illinois, defines fieldwork as "direct inspection (of music, culture, etc) at the source '', and states that "It is in the importance of fieldwork that anthropology and ethnomusicology are closest: It is a ' hallmark ' of both fields, something like a union card ''. The experience of an ethnomusicologist in the field is his / her data; experience, texts (e.g. tales, myths, proverbs), structures (e.g. social organization), and "imponderabilia of everyday life '' all contribute to an ethnomusicologist 's study. The importance of fieldwork in the field of ethnomusicology has required the development of effective methods to pursue fieldwork.
From the 19th century until the mid-20th century, European scholars (folklorists, ethnographers, and some early ethnomusicologists) who were motivated to preserve disappearing music cultures (from both in and outside of Europe) collected transcriptions or audio recordings on wax cylinders. Many such recordings were then stored at the Berlin Phonogramm - Archiv at the Berlin school of comparative musicology, which was founded by Carl Stumpf, his student Erich M. von Hornbostel, and medical doctor Otto Abraham. Stumpf and Hornbostel studied and preserved these recordings in the Berlin Archiv, setting the foundation for contemporary ethnomusicology. But the "armchair analysis '' methods of Stumpf and Hornbostel required very little fieldwork participation on their own part; instead, they relied on the fieldwork of other scholars. This differentiates Stumpf and Hornbostel from their present - day contemporaries, who now use their fieldwork experience as a main component in their research.
Ethnomusicology 's transition from "armchair analysis '' to fieldwork reflected ethnomusicologists trying to distance themselves from the field of comparative musicology in the period following World War II. Fieldwork emphasized face - to - face interaction to gather the most accurate impression and meaning of music from the creators of the music, in contrast with "armchair analysis '' that disconnected the ethnomusicologist from the individual or group of performers. David McAllester was paramount in helping the discipline transition from the "armchair analysis '' to culturally specific fieldwork. He worked with the Navajo, living with them so he could study Enemy Way music more intimately. This work involved an entirely different conceptualization of music than that generally accepted in the West. (Navajo, like some other languages, has no direct word for music, instead referring to it in the context of its function). Due to McAllester 's success, fieldwork became one of the most important parts of ethnomusicological study.
As technology advanced, researchers graduated from depending on wax cylinders and the phonograph to digital recordings and video cameras, allowing recordings to become more accurate representations of music studied. These technological advances have helped ethnomusicologists be more mobile in the field, but have also let some ethnomusicologists shift back to the "armchair analysis '' of Stumpf and Hornbostel. Since video recordings are now considered cultural texts, ethnomusicologists can conduct fieldwork by recording music performances and creating documentaries of the people behind the music, which can be accurately studied outside of the field. Additionally, the invention of the internet and forms of online communication could allow ethnomusicologists to develop new methods of fieldwork within a virtual community.
Heightened awareness of the need to approach fieldwork in an ethical manner arose in the 1970s in response to a similar movement within the field of anthropology. Mark Slobin writes in detail about the application of ethics to fieldwork. Several potential ethical problems that arise during fieldwork relate to the rights of the music performers. To respect the rights of performers, fieldwork often includes attaining complete permission from the group or individual who is performing the music, as well as being sensitive to the rights and obligations related to the music in the context of the host society.
Another ethical dilemma of ethnomusicological fieldwork is the inherent ethnocentrism (more commonly, eurocentrism) of ethnomusicology. Anthony Seeger, Emeritus Professor of Ethnomusicology at UCLA, has done seminal work on the notion of ethics within fieldwork, emphasizing the need to avoid ethnocentric remarks during or after the fieldwork process. Emblematic of his ethical theories is a 1983 piece that describes the fundamental complexities of fieldwork through his relationship with the Suyá Indians of Brazil. To avoid ethnocentrism in his research, Seeger does not explore how singing has come to exist within Suyá culture, instead explaining how singing creates culture presently, and how aspects of Suyá social life can be seen through both a musical and performative lens. Seeger 's analysis exemplifies the inherent complexity of ethical practices in ethnomusicological fieldwork, underscoring the importance of continually developing effective fieldwork methods in the study of ethnomusicology.
Ethnomusicologists initially started to question the possibility of universals because they were searching for a new approach to explaining musicology that differed from Guido Adler 's. Charles Seeger, for instance, categorized his interpretation of musical universals by using inclusion - exclusion styled Venn diagrams to create five types of universals, or absolute truths, of music. Universals in music are as hard to come by as universals in language, even though both have been hypothesized to rest on a universal grammar or syntax. Dane Harwood noted that looking for causal relationships and "deep structure '' (as postulated by Chomsky) is a relatively fruitless way to look for universals in music. Yet the search for musical universals has remained a topic amongst ethnomusicologists since Wilhelm Wundt, who tried to prove that "all ' primitive ' peoples have monophonic singing and use intervals. '' Nettl shares the belief with his colleagues that trying to find a universal in music is unproductive, because there will always be at least one instance proving that a proposed universal does not hold. For example, George List writes, "I once knew a missionary who assured me that the Indians to whom he had ministered on the west coast of Mexico neither sang nor whistled, '' and ethnomusicologist David P. McAllester writes, "Any student of man must know that somewhere, someone is doing something that he calls music but nobody else would give it that name. That one exception would be enough to eliminate the possibility of a real universal. '' As a result of this readiness of ethnomusicologists to poke holes in proposed universals, focus shifted from trying to find a universal to trying to find near - universals, or qualities that may unite the majority of the world 's musics.
McAllester was a believer in near universals. He wrote, "I will be satisfied if nearly everybody does it, '' which is why he postulated that nearly all music has a tonal center, has a tendency to go somewhere, and has an ending. However, McAllester 's main point is that music transforms the everyday humdrum into something else, bringing about a heightened experience. He likens music to an out - of - body experience, religion, and sex. It is music 's ability to transport people mentally that is, in his opinion, a near universal that almost all musics share.
In response to McAllester 's Universal Perspectives on Music, Klaus P. Wachsmann counters that even a near universal is hard to come by, because there are many variables when considering a subjective topic like music, and because music should not be removed from culture as a singular variable. His approach, instead of finding a universal, was to create an amalgam of relations between sound and psyche: "(1) the physical properties of the sounds, (2) the physiological response to the acoustic stimuli, (3) the perception of sounds as selected by the human mind that is programmed by previous experiences, and (4) the response to the environmental pressures of the moment. In this tetradic schema lies an exhaustive model of the universals in music. '' However, Wachsmann does allow that all listeners ' perceptions are shaped by prior experience, a belief echoed by another ethnomusicologist who holds that the universal lies in the specific way music reaches the listener: "Whatever it communicates is communicated to the members of the in - group only, whoever they may be. This is as true of in - groups in our own society as in any other. Does "classical '' music communicate to every American? Does rock and roll communicate to every parent? ''
George List, like Wachsmann, rebuts McAllester 's essay; in his rebuttal he posits that "the only universal aspect of music seems to be that most people make it, '' once again underlining how difficult it was for ethnomusicologists to formulate a universal (note his use of the words "most people ''). List even goes as far as to say, "The entire panel discussion, and everything I have written here, are probably equally and universally unnecessary. Like Seeger, we have probably been talking and writing to ourselves. As far as ethnomusicologists are concerned, this is likely a universal phenomenon. '' This viewpoint asserts that the beneficiaries of finding a universal in music would not parallel the global objectives of unifying music. List also wanted to compare musics across cultures to show that there was no universal, because even between two people from the same culture there is variation. To do this, he would play Western classical music with descriptive titles for Africans and ask them to identify the title. He found that no one could guess that a piece like Sinding 's Rustle of Spring could possibly be about spring.
Dane Harwood suggests that while there can be no cultural universals in music, there exist universal modes of cognitive understanding that we all undergo when we listen to music. Harwood also highlighted several inherent issues with the notion of universality in music. The first of these is structure vs. function in music. He notes that human behavior is structurally predicated, and that as such, not all behavioral patterns (which some observe to find universals) imply functional activity in music. He also drew a distinction between content and process in musical behavior. In drawing this distinction, he highlighted that scholars studying universals should shift from studying what, in terms of content, various cultural groups play to the process by which individuals learn music. In summary, his view is that universals in music are not a matter of specific musical structure or function -- but of basic human cognitive and social processes construing and adapting to the real world.
It is often the case that interests in ethnomusicology stem from trends in anthropology, and this is no different for symbols. In 1949, anthropologist Leslie White wrote, "the symbol is the basic unit of all human behavior and civilization, '' and that use of symbols is a distinguishing characteristic of humans. Once symbolism was at the core of anthropology, scholars sought to examine music "as a symbol or system of signs or symbols, '' leading to the establishment of the field of musical semiotics. Bruno Nettl discusses various issues relating ethnomusicology to musical semiotics, including the wide variety of culturally dependent, listener - derived meanings attributed to music and the problems of authenticity in assigning meaning to music. Some of the meanings that musical symbols can reflect can relate to emotion, culture, and behavior, much in the same way that linguistic symbols function.
The interdisciplinarity of symbolism in anthropology, linguistics, and musicology has generated new analytical outlooks (see Analysis) with different focuses: Anthropologists have traditionally conceived of whole cultures as systems of symbols, while musicologists have tended to explore symbolism within particular repertories. Structural approaches seek to uncover interrelationships between symbolic human behaviors.
In the 1970s, a number of scholars, including musicologist Charles Seeger and semiotician Jean - Jacques Nattiez, proposed using methodology commonly employed in linguistics as a new way for ethnomusicologists to study music. This new approach, widely influenced by the works of linguist Ferdinand de Saussure, philosopher Charles Sanders Peirce, and anthropologist Claude Lévi - Strauss, among others, focused on finding underlying symbolic structures in cultures and their music.
In a similar vein, Judith Becker and Alton L. Becker theorized the existence of musical "grammars '' in their studies of the theory of Javanese gamelan music. They proposed that music could be studied as symbolic and that it bears many resemblances to language, making semiotic study possible. Classifying music as a humanity rather than a science, Nattiez suggested that subjecting music to linguistic models and methods might prove more effective than employing the scientific method. He proposed that the inclusion of linguistic methods in ethnomusicology would increase the field 's independence, reducing the need to borrow resources and research procedures from other sciences.
John Blacking was another ethnomusicologist who sought to create an ethnomusicological parallel to linguistic models of analysis. In his work on Venda music, he writes, "The problem of musical description is not unlike that in linguistic analysis: a particular grammar should account for the processes by which all existing and all possible sentences in the language are generated. '' Blacking sought more than sonic description. He wanted to create a musical analytical grammar, which he termed the Cultural Analysis of Music, that could incorporate both sonic description and how cultural and social factors influence structures within music. Blacking desired a unified method of musical analysis that "... can not only be applied to all music, but can explain both the form, the social and emotional content, and the effects of music, as systems of relationships between an infinite number of variables. '' Like Nattiez, Blacking saw a universal grammar as necessary for giving ethnomusicology a distinct identity. He felt that ethnomusicology was just a "meeting ground '' for anthropology of music and the study of music in different cultures, and lacked a distinguishing characteristic in scholarship. He urged others in the field to become more aware and inclusive of the non-musical processes that occur in the making of music, as well as the cultural foundation for certain properties of the music in any given culture, in the vein of Alan Merriam 's work.
Some musical languages have been identified as more suited to linguistically - focused analysis than others. Indian music, for example, has been linked more directly to language than music of other traditions. Critics of musical semiotics and linguistic - based analytical systems, such as Steven Feld, argue that music only bears significant similarity to language in certain cultures and that linguistic analysis may frequently ignore cultural context.
Since ethnomusicology evolved from comparative musicology, some ethnomusicologists ' research features analytical comparison. The problems arising from using these comparisons stem from the fact that there are different kinds of comparative studies with a varying degree of understanding between them. Beginning in the late 1960s, ethnomusicologists who desired to draw comparisons between various musics and cultures have used Alan Lomax 's idea of cantometrics. Some cantometric measurements in ethnomusicology studies have been shown to be relatively reliable, such as the wordiness parameter, while other methods are not as reliable, such as precision of enunciation. Another approach, introduced by Steven Feld, is for ethnomusicologists interested in creating ethnographically detailed analysis of people 's lives; this comparative study deals with making pairwise comparisons about competence, form, performance, environment, theory, and value / equality. Bruno Nettl has noted as recently as 2003 that comparative study seems to have fallen in and out of style, noting that although it can supply conclusions about the organization of musicological data, reflections on history or the nature of music as a cultural artifact, or understanding some universal truth about humanity and its relationship to sound, it also generates a great deal of criticism regarding ethnocentrism and its place in the field.
The relevance and implications of insider and outsider distinctions within ethnomusicological writing and practice have been a subject of lengthy debate for decades, invoked by Bruno Nettl, Timothy Rice, and others. The question driving such debate concerns an ethnomusicologist 's qualifications to research another culture as an outsider, dissecting a culture that is not their own. Historically, ethnomusicological research was tainted with a strong bias from Westerners in thinking that their music was superior to the musics they researched. From this bias grew an apprehension among cultures about allowing ethnomusicologists to study them, out of fear that their music would be exploited or appropriated. There are benefits to ethnomusicological research, e.g. the promotion of international understanding, but the fear of this "musical colonialism '' represents the opposition to an outsider ethnomusicologist in conducting his or her research on a community of insiders.
In The Study of Ethnomusicology: Thirty - One Issues and Concepts, Nettl discusses personal and global issues pertaining to field researchers, particularly those from a Western academic background. In a chapter that recounts his field recordings among Native Americans of the northern plains, for instance, he attempts to come to terms with the problematic history of ethnographic fieldwork, and envision a future trajectory for the practice in the 21st century and beyond. Considering that ethnomusicology intersects with a vast array of other fields in the social sciences and beyond, and that it focuses on studying people, it is apt to encounter the issue of "making the unfamiliar, familiar, '' a phrase coined by William McDougall that is well known in social psychology. As in social psychology, the "unfamiliar '' is encountered in three different ways during ethnomusicological work: 1) two different cultures come into contact and elements of both are not immediately explicable to the other; 2) experts within a society produce new knowledge, which is then communicated to the public; and 3) active minorities communicate their perspective to the majority.
Nettl has also been vocal about the effect of subjective understanding on research. As he describes, a fieldworker might attempt immersing themselves in an outside culture to gain full understanding. This, however, can begin to blind the researcher and take away the ability to be objective in what is being studied. The researcher begins to feel like an expert in a culture 's music when, in fact, they remain an outsider no matter the amount of research, because they are from a different culture. The background knowledge of each individual influences the focus of the study because of the comfort level with the material. Nettl characterizes the majority of outsiders as "simply members of Western society who study non-Western music, or members of affluent nations who study the music of the poor, or maybe city folk who visit the backward villages in their hinterland. '' This points to possible Eurocentric origins of researching foreign and exotic music. Within this outsider / insider framework, unequal power relations come into focus and into question.
In addition to his critiques of the outsider and insider labels, Nettl creates a binary that roughly equates to Western and Nonwestern. He points out what he feels are flaws in Western thinking through the analyses of multiple societies, and promotes the notion of collaborating, with a greater focus on acknowledging the contribution of native experts. He writes, "The idea of joint research by an ' insider ' and an ' outsider ' has been mentioned as a way of bridging the chasms. '' In spite of his optimism, the actualization of this practice has been limited and the degree to which this can solve the insider / outsider dilemma is questionable. He believes that every concept is studied through a personal perspective, but "a comparison of viewpoints may give the broadest possible insight. ''
The position of ethnomusicologists as outsiders looking in on a music culture has been discussed using Said 's theory of Orientalism. This manifests itself in the notion that music championed by the field may be, in many ways, a Western construction based on an imagined or romanticized view of "the Other '' situated within a colonial mindset. According to Nettl, three beliefs held by insiders and members of the host culture emerge that lead to adverse results. The three are as follows: (1) "Ethnomusicologists come to compare non-Western musics or other "other '' traditions to their own... in order to show that the outsider 's own music is superior, '' (2) "Ethnomusicologists want to use their own approaches to non-Western music; '' and (3) "They come with the assumption that there is such a thing as African or Asian or American Indigenous music, disregarding boundaries obvious to the host. '' As Nettl argues, some of these concerns are no longer valid, as ethnomusicologists no longer practice certain orientalist approaches that homogenize and totalize various musics. He explores further intricacies within the insider / outsider dichotomy by deconstructing the very notion of insider, contemplating what geographic, social, and economic factors distinguish them from outsiders. He notes that scholars of "more industrialized African and Asian nations '' see themselves as outsiders with regard to rural societies and communities. Even though these individuals are in the minority, and ethnomusicology and its scholarship is generally written from a Western perspective, Nettl disputes the notion of the native as the perpetual other and the outsider as the Westerner by default.
Timothy Rice is another author who discusses the insider / outsider debate in detail but through the lens of his own fieldwork in Bulgaria and his experience as an outsider trying to learn Bulgarian music. In his experience, told through his book May it Fill Your Soul: Experiencing Bulgarian Music, he had a difficult time learning Bulgarian music because his musical framework was founded in a Western perspective. He had to "broaden his horizons '' and try instead to learn the music from a Bulgarian framework in order to learn to play it sufficiently. Although he did learn to play the music, and the Bulgarian people said that he had learned it quite well, he admitted that "there are still areas of the tradition (...) that elude my understanding and explanation. (...) Some sort of culturally sensitive understanding (...) will be necessary to close this gap. ''
Ultimately, Rice argues that, despite the impossibility of objectivity in one 's work, ethnomusicologists may still learn much from self - reflection. In his book, he questions whether one can be objective in understanding and discussing art and, in accordance with the philosophies of phenomenology, argues that there can be no such objectivity, since the world is constructed with preexisting symbols that distort any "true '' understanding of the world we are born into. He then suggests that no ethnomusicologist can ever come to an objective understanding of a music, nor can an ethnomusicologist understand foreign music in the same way that a native would understand it. In other words, an outsider can never become an insider. However, an ethnomusicologist can still come to a subjective understanding of that music, which then shapes that scholar 's understanding of the outside world. From his own scholarship, Rice suggests "five principles for the acquisition of cognitive categories in this instrumental tradition '' among Bulgarian musicians. However, as an outsider, Rice notes that his "understanding passed through language and verbal cognitive categories '' whereas the Bulgarian instrumental tradition lacked "verbal markers and descriptors of melodic form, '' so "each new student had to generalize and learn on his own the abstract conceptions governing melodies without verbal or visual aids. '' These two different methods of learning music, the outsider searching for verbal descriptions and the insider learning by imitation, represent the essential differences between Rice 's culture and Bulgarian culture. These inherent musical differences blocked him from reaching the role of an insider.
Not only is there the question of being an outsider while studying another culture, but also the question of how to go about studying one 's own society. Nettl 's approach would be to determine how a culture classifies its own music, attending to the categories its members would create for that purpose. In this way, one can distinguish oneself from the outsider while still retaining some insider insight. Kingsbury believes it is impossible to study a music from outside of one 's own culture; but what if that culture is your own? One must then be aware of the personal bias one may impose on the study of one 's own culture.
Kingsbury, an American pianist and ethnomusicologist, decided to reverse the common paradigm of a Westerner performing fieldwork in a non-western context, and apply fieldwork techniques to a western subject. In 1988 he published Music, Talent, and Performance: A Conservatory Cultural System, which detailed his time studying an American northeastern conservatory. He approached the conservatory as if it were a foreign land, doing his best to disassociate his experiences and prior knowledge of American conservatory culture from his study. In the book, Kingsbury analyzes conservatory conventions he and his peers may have overlooked, such as the way announcements are disseminated, to make assertions about the conservatory 's culture. For example, he concludes that the institutional structure of the conservatory is "strikingly decentralized. '' In light of professors ' absences, he questions the conservatory 's commitment to certain classes. His analysis of the conservatory contains four main elements: a high premium on teachers ' individuality, teachers ' role as nodal points that reinforce a patron - client - like system of social organization, this subsequent organization 's enforcement of the aural traditions of musical literacy, and the conflict between this client / patron structure and the school 's "bureaucratic administrative structure. '' Ultimately, it seems, Kingsbury thinks the conservatory system is inherently flawed. He emphasizes that he does n't intend to "chide '' the conservatory, but his critiques are nonetheless far from complimentary.
Another example of western ethnomusicologists studying their native environments comes from Craft 's My Music: Explorations of Music in Daily Life. The book contains interviews from dozens of (mostly) Americans of all ages, genders, ethnicities, and backgrounds, who answered questions about the role of music in their lives. Each interviewee had their own unique, necessary, and deeply personal internal organization of their own music. Some cared about genre, others organized the music important to themselves by artist. Some considered music deeply important to them, some did not care about music at all.
Early in the history of the field of ethnomusicology, there was debate as to whether ethnomusicological work could be done on the music of western society, or whether its focus was exclusively toward non-western music. Some early scholars, such as Mantle Hood, argued that ethnomusicology had two potential focuses: the study of all non-European art music, and the study of the music found in a given geographical area.
However, even as early as the 1960s some ethnomusicologists were proposing that ethnomusicological methods should also be used to examine western music. For instance, Alan Merriam, in a 1960 article, defines ethnomusicology not as the study of non-European music, but as the study of music in culture. In doing so he discards some of the ' external ' focus proposed by the earlier (and contemporary) ethnomusicologists, who regarded non-European music as more relevant to the attention of scholars. Moreover, he expands the definition from being centered on music to including the study of culture as well.
Modern ethnomusicologists, for the most part, consider the field to apply to western music as well as non-western. However, ethnomusicology, especially in the earlier years of the field, was still primarily focused on non-western cultures; it is only in recent years that ethnomusicological scholarship has begun to allow more diversity with respect to both the cultures being studied and the methods by which these cultures may be studied.
Despite the increased acceptance of ethnomusicological examinations of western music, modern ethnomusicologists still focus overwhelmingly on non-western music. One of the few major examinations of western music from an ethnomusicological focus, as well as one of the earliest, is Henry Kingsbury 's book Music, Talent, and Performance. In his book, Kingsbury studies a conservatory in the north - eastern United States. His examination of the conservatory uses many of the traditional fieldwork methods of ethnomusicology.
Ethics is vital in ethnomusicology because the products of fieldwork often result from the interaction between two cultures. Applying ethics to the field helps confirm that each party is comfortable with the elements of the product and ensures that each party is compensated fairly for their contribution. For the monetary effects after a work is published, see the copyright section below.
Ethics is defined by Merriam-Webster as "the principles of conduct governing an individual or a group." Historical primary documents contain accounts of interactions between two cultures. An example is Hernán Cortés's personal journal from his expedition to Mexico and his interactions with the Aztecs. He took note of every interaction, as he was a proxy for the Spanish monarchy. This interaction was not beneficial to both parties: Cortés, as a soldier, conquered the Aztecs and seized their wealth, goods, and property unjustly. Historically, interactions between two different cultures have often not ended with both parties uplifted. In fieldwork, ethnomusicologists travel to a specific country with the intent of learning more about its culture, and while there, their ethics guide how they interact with the indigenous people.
The Society for Ethnomusicology has a committee on ethics that publishes the field's official Position Statement on Ethics. Because ethnomusicology inherits some fundamental values from anthropology, some of its ethics parallel those of anthropology. The American Anthropological Association has statements about ethics and anthropological research that can be paralleled with ethnomusicology's statement.
Mark Slobin, a twentieth-century ethnomusicologist, observes that discussion of ethics has been founded on several assumptions, namely that: 1) "Ethics is largely an issue for 'Western' scholars working in 'non-Western' societies"; 2) "Most ethical concerns arise from interpersonal relations between scholar and 'informant' as a consequence of fieldwork"; 3) "Ethics is situated within... the declared purpose of the researcher: the increase of knowledge in the ultimate service of human welfare" (a reference to Ralph Beals); and 4) "Discussion of ethical issues proceeds from values of Western culture." Slobin remarks that a more accurate statement might acknowledge that ethics vary across nations and cultures, and that the ethics of both the researcher's and the informant's cultures are in play in fieldwork settings.
Slobin discusses several case scenarios that present ethically ambiguous situations.
Slobin's discussion of ethical issues in ethnomusicology is notable for highlighting the ethnomusicology community's apathy toward the public discussion of such issues, as evidenced by the lackluster response of scholars at a large 1970 SEM meeting.
Slobin also points out a facet of ethical thinking among ethnomusicologists: many of the ethical rules deal with Westerners studying in non-Western, third-world countries. Non-Western ethnomusicologists are thereby excluded from these rules, as are Westerners studying Western music.
He also highlights several prevalent issues in ethnomusicology by using hypothetical cases from an American Anthropological Association newsletter and framing them in terms of ethnomusicology. For example: "You bring a local musician, one of your informants, to the West on tour. He wants to perform pieces you feel inappropriately represent his tradition to Westerners, as the genre reinforces Western stereotypes about the musician's homeland... do you have the right to overrule the insider when he is on your territory?"
Ethnomusicologists also tend to discuss ethics in sociological contexts. Timothy Taylor writes on the byproducts of cultural appropriation through music, arguing that the twentieth-century commodification of non-western musics served to marginalize certain groups of musicians who were not traditionally integrated into the western music production and distribution industries. Steven Feld argues that ethnomusicologists also have a place in analyzing the ethics of popular music collaboration, such as Paul Simon's work with traditional zydeco, Chicano, and South African music on Graceland. Feld notes that the inherently imbalanced power dynamics within musical collaboration can contribute to cultural exploitation.
Discussions of ethics in ethnomusicology must be specific about whom they apply to. An ethnomusicologist must weigh ethics when coming from a culture different from the one being studied, whereas an ethnomusicologist researching his or her own culture does not face the same concerns. For example, the music scholar Kofi Agawu writes about African music and its significant aspects: the dynamics of music among the generations, the significance of the music, and its effects on society. Agawu highlights that some scholars gloss over the spirit of African music and argues that this is problematic because the spirit is one of the most essential components of the music. Agawu is himself a scholar from Africa, specifically Ghana, and knows the culture intimately because he is part of it. Being a native of the culture one is studying is beneficial because of the instinctive insight one has absorbed since birth.
Martin Rudoy Scherzinger, another twentieth-century ethnomusicologist, contests the claim that copyright law is inherently conducive to the exploitation of non-Westerners by Western musicologists, for a variety of reasons, some of which he quotes from other ethnomusicologists: some non-Western pieces are uncopyrightable because they are passed down orally; some "sacred songs are issued forth by ancient spirits or gods," leaving no author who could hold copyright; and the concept of copyright may only be relevant in "commercially oriented societies." Furthermore, the very notion of originality (especially in the West) is a quagmire in and of itself. Scherzinger also raises issues that arise with metaphysical interpretations of authorial autonomy, arguing that Western aesthetic interpretation is no different from non-Western interpretation: all music is "for the good of mankind," yet the law treats it differently.
Gender concerns have more recently risen to prominence in the methodology of ethnomusicology. Modern researchers often criticize historical works of ethnomusicology as showing gender-biased research and androcentric theoretical models that do not reflect reality. There are many reasons for this issue. Historically, ethnomusicological fieldwork often focused on the musical contributions of men, in line with the underlying assumption that male-dominated musical practices reflected the musical systems of a society as a whole. Other gender-biased research may be attributed to the difficulty of acquiring information on female performers without infringing upon cultural norms that did not accept or allow women to perform in public (reflective of social dynamics in societies where men dominate public life and women are mostly confined to the private sphere). Finally, men have traditionally dominated fieldwork and institutional leadership positions and tended to prioritize the experiences of men in the cultures they studied. With a lack of accessible female informants and alternative forms of collecting and analyzing musical data, ethnomusicological researchers such as Ellen Koskoff believe that we may not be able to fully understand the musical culture of a society. Koskoff quotes Rayna Reiter in saying that bridging this gap would explain the "seeming contradiction and internal workings of a system for which we have only half the pieces."
Despite the historical trend of overlooking gender, modern ethnomusicologists believe that studying gender can provide a useful lens for understanding the musical practices of a society. Considering the divisions of gender roles in society, ethnomusicologist Ellen Koskoff writes: "Many societies similarly divide musical activity into two spheres that are consistent with other symbolic dualisms," including such culture-specific, gender-based dualisms as private/public, feelings/actions, and sordid (provocative)/holy. In some cultures, music comes to reflect those divisions in such a way that women's music and instrumentation is viewed as "non-music" as opposed to men's "music." These and other dualities of musical behavior can help demonstrate societal views of gender, whether the musical behavior supports or subverts gender roles.
Women contributed extensively to ethnomusicological fieldwork from the 1950s onward, but women 's and gender studies in ethnomusicology took off in the 1970s. Ellen Koskoff articulates three stages in women 's studies within ethnomusicology: first, a corrective approach that filled in the basic gaps in our knowledge of women 's contributions to music and culture; second, a discussion of the relationships between women and men as expressed through music; third, integrating the study of sexuality, performance studies, semiotics, and other diverse forms of meaning - making. Since the 1990s, ethnomusicologists have begun to consider the role of the fieldworker 's identity, including gender and sexuality, in how they interpret the music of other cultures. For example, Susan McClary 's watershed book Feminine Endings (1991) shows "relationships between musical structure and socio - cultural values '' and has influenced ethnomusicologists, although it is not an ethnomusicological book. There is a general understanding that Western conceptions of gender, sexuality, and other social constructions do not necessarily apply to other cultures and that a predominantly Western lens can cause various methodological issues for researchers.
The concept of gender in ethnomusicology is also tied to the idea of reflexive ethnography, in which researchers critically consider their own identities in relation to the societies and people they are studying. For example, Katherine Hagedorn uses this technique in Divine Utterances: The Performance of Afro-Cuban Santería. Throughout her description of her fieldwork in Cuba, Hagedorn remarks how her positionality, through her whiteness, femaleness, and foreignness, afforded her luxuries out of reach of her Cuban counterparts. Her positionality also put her in an "outsider" perspective on Cuban culture and affected her ability to access the culture as a researcher on Santería. Her whiteness and foreignness, she writes, allowed her to circumvent intimate inter-gender relations centered around performance on the batá drum. Unlike her Cuban female counterparts, who faced stigma, she was able to learn to play the batá and thus formulate her research.
In the first chapter of his book Popular Music of the Non-Western World, Peter Manuel examines the effect technology has had on non-western music, discussing its ability to disseminate, change, and influence music around the world. He begins with a discussion of definitions of genres, highlighting the difficulties in distinguishing among folk, classical, and popular music within any one society. By tracing the historical development of the phonograph, radio, cassette recordings, and television, Manuel shows that, following the practice set in the western world, music has become a commodity in many societies and no longer has the same capacity to unite a community, to offer a kind of "mass catharsis" as one scholar put it. He stresses that any modern theoretical lens through which to view music must account for the advent of technology.
Copyright is defined as "the exclusive right to make copies, license, and otherwise exploit a literary, musical, or artistic work, whether printed, audio, video, etc." It is important because copyright dictates where credit and monetary awards are allocated. While conducting fieldwork, ethnomusicologists interact with indigenous people and collect information from which to draw conclusions, leaving their countries of interest with interviews, videos, texts, and other valuable material. Rights surrounding music ownership are thus often left to ethics.
The specific issue with copyright and ethnomusicology is that copyright is a national (in this case American) legal framework, while ethnomusicologists often conduct research in countries outside the United States. For example, Anthony Seeger details his experience working with the Suyá people of Brazil and the release of their song recordings. The Suyá have practices and beliefs about inspiration and authorship in which songs may originate with animals or spirits and be "owned" by entire communities. American copyright law, by contrast, asks for a single original author, not groups of people, animals, or spirits. Situations like Seeger's can result in indigenous people not being credited, or not having access to the monetary wealth that may come with the published recordings. Seeger also mentions that in some cases copyright is granted, but the informant-performer, the researcher, the producer, and the organization funding the research earn the credit that the indigenous people deserve.
Martin Scherzinger discusses how copyright is handled in Senegal. The copyright benefits from music, such as royalties, are allocated to the Senegalese government, which in turn hosts a talent competition whose winner receives the royalties. Scherzinger offers a differing opinion on copyright, arguing that the law is not inherently ethnocentric. He cites the early ideology behind copyright in the nineteenth century, noting that spiritual inspiration did not prohibit composers from being granted authorship of their works. Furthermore, he suggests that group ownership of a song is not significantly different from the collective influence of several composers on any individual work in Western classical music.
One proposed solution to the copyright issues facing ethnomusicology is to push for a broadening of copyright law in the United States. Broadening here means changing who can be cited as the original author of a work so as to accommodate the values of specific societies. For this to happen, ethnomusicologists would have to find common ground among the copyright issues they have encountered collectively.
The origins of music and its connections to identity have been debated throughout the history of ethnomusicology. Thomas Turino defines "self, '' "identity, '' and "culture '' as patterns of habits, such that tendencies to respond to stimuli in particular ways repeat and reinscribe themselves. Musical habits and our responses to them lead to cultural formations of identity and identity groups. For Martin Stokes, the function of music is to exercise collective power, creating barriers among groups. Thus, identity categories such as ethnicity and nationality are used to indicate oppositional content.
Just as music reinforces categories of self-identification, identity can shape musical innovation. George Lipsitz's 1986 case study of Mexican-American music in Los Angeles from the 1950s to the 1980s posits that Chicano musicians were motivated to integrate multiple styles and genres into their music to represent their multifaceted cultural identity. By incorporating Mexican folk music and modern-day barrio influences, Mexican rock-and-roll musicians in LA made commercially successful postmodern records about their community, history, and identity. Lipsitz suggests that the Mexican community in Los Angeles reoriented its traditions to fit the postmodern present. Seeking a "unity of disunity," minority groups can attempt to find solidarity by presenting themselves as sharing experience with other oppressed groups. According to Lipsitz, this unity of disunity engenders a "historical bloc" made up of numerous, multifaceted, marginalized cultures.
Lipsitz notes that the bifocal nature of the rock group Los Lobos is particularly exemplary of this paradox. They straddled the line by mixing traditional Mexican folk elements with white rockabilly and African American rhythm and blues, while conforming to none of those genres. That they were commercially successful was unsurprising to Lipsitz: their goal in incorporating many cultural elements equally was to appeal to everyone. In this manner, in Lipsitz's view, the music served to break down barriers through its upfront presentation of "multiple realities."
Lipsitz describes the weakening effect that the dominant (Los Angeles) culture imposes on marginalized identities. He suggests that the mass media dilute minority culture by representing the dominant culture as the most natural and normal. Lipsitz also proposes that capitalism turns the historical traditions of minority groups into superficial icons and images in order to profit from their perception as "exotic" or different. The commodification of these icons and images therefore results in the loss of their original meaning.
Minorities, according to Lipsitz, can neither fully assimilate nor completely separate themselves from dominant groups. Their cultural marginality and misrepresentation in the media make them aware of society's skewed perception of them. Antonio Gramsci suggests that there are "experts in legitimization" who attempt to legitimize dominant culture by making it appear consented to by the people who live under it. He also proposes that oppressed groups have their own "organic intellectuals" who provide counter-oppressive imagery to resist this legitimization. For example, lowriders used irony to poke fun at popular culture's perception of desirable vehicles, and bands like Los Illegals provided their listening communities with a useful vocabulary for talking about oppression and injustice.
Michael M.J. Fisher breaks down the following main components of postmodern sensibility: "bifocality or reciprocity of perspectives, juxtaposition of multiple realities - intertextuality, inter-referentiality, and comparisons through families of resemblance. '' A reciprocity of perspectives makes music accessible inside and outside of a specific community. Chicano musicians exemplified this and juxtaposed multiple realities by combining different genres, styles, and languages in their music. This can widen the music 's reception by allowing it to mesh within its cultural setting, while incorporating Mexican history and tradition. Inter-referentiality, or referencing relatable experiences, can further widen the music 's demographic and help to shape its creators ' cultural identities. In doing so, Chicano artists were able to connect their music to "community subcultures and institutions oriented around speech, dress, car customizing, art, theater, and politics. '' Finally, drawing comparisons through families of resemblance can highlight similarities between cultural styles. Chicano musicians were able to incorporate elements of R&B, Soul, and Rock n ' Roll in their music.
Music is not only used to create group identities, but to develop personal identity as well. Frith describes music 's ability to manipulate moods and organize daily life. Susan Crafts studied the role of music in individual life by interviewing a wide variety of people, from a young adult who integrated music in every aspect of her life to a veteran who used music as a way to escape his memories of war and share joy with others. Many scholars have commented on the associations that individuals develop of "my music '' versus "your music '': one 's personal taste contributes to a sense of unique self - identity reinforced through the practices of listening to and performing certain music.
As part of a broader inclusion of identity politics (see Gender), ethnomusicologists have become increasingly interested in how identity shapes ethnomusicological work. Fieldworkers have begun to consider their positions within race, economic class, gender, and other identity categories and how they relate to or differ from cultural norms in the areas they study. Katherine Hagedorn 's 2001 Book Divine Utterances: The Performance of Afro - Cuban Santería is an example of experiential ethnomusicology, which "... incorporates the author 's voice, interpretations, and reactions into the ethnography, musical and cultural analysis, and historical context. '' The book received the Society for Ethnomusicology 's prestigious Alan P. Merriam prize in 2002, marking a broad acceptance of this new method in the institutions of ethnomusicology.
Music forms a large part of national sentiment, or patriotism. National musical styles may include songs and genres used for the reification of traditional culture, or for more explicitly political purposes. One example of this phenomenon can be observed in Frédéric Chopin, a composer of Polish ancestry who became internationally recognized within the Western classical music sphere. By invoking traditional Polish forms in his compositions, Chopin became known as a symbol of Polish national identity on an international scale. Martin Stokes points out that this work of associating Chopin's music with Polish national identity fell more upon political ideologues than upon the actual content of the music itself.
According to Turino, the most important requirement for the successful infusion of nationalism within a nation is emotion. While the Rhodesian government failed to capture the emotions of the people of Zimbabwe, Robert Mugabe did not. He formed the Youth League, which ended up leading most party activities. These activities took the form of nationalist rallies complete with singing, thudding drums, and tribal dances "designed to create an inclusive image of the nation-to-be." These rallies advocated a return to old traditional African rule. When certain songs or dances are performed at rallies, that music becomes closely associated with them. A country's national anthem, for example, has a strong association with national pride, and therefore with nationalism. The performance of tribal dances that originated in a specific nation displays the artistry and unique character of its people; folk songs work the same way, as they are unique to the country in which they originated. For spectators, watching or listening to music from their own country stirs a great sense of national pride, producing the emotion that nationalism requires.
Other examples also demonstrate how national musical styles are constructed in the service of unifying a nation-state, particularly in line with modernizing developments. Thomas Turino examined musical nationalism and its implications within and across national boundaries, defining musical nationalism as the incorporation of local 'folk' elements into elite or cosmopolitan musical styles. Colonial Western powers had a hand in introducing cosmopolitan styles, and Zimbabweans were also influential in shaping their sense of nation. This process of nation-building required a constant negotiation between the need for local emblems (such as national music) and the need to define one's own nation in relation to others throughout the world. Such a balancing act necessitated the creation of a new national culture through modernizing reforms. National music in Zimbabwe, then, can be described as any music, foreign or domestic, used in the process of forwarding nationalist movements. Turino describes how his study necessitated working with a wide range of people involved with music, including "white music teachers, farmer's wives, and suburbanites; music-business executives, producers, and managers; professionals of the black middle class; Shona peasants of different age groups in the rural northeast; members of regional, working-class, dance-drumming clubs in the townships around Harare...; a number of mbira players dedicated to indigenous Shona practices and knowledge...; members of professional 'folkloric' dance groups; state cultural officials and workers; my black, middle-class neighbors in Mabelreign suburb where I lived with my family; and a broad spectrum of popular guitar-band musicians." This diverse group of people all helped to define the national music of Zimbabwe, bringing in both local and cosmopolitan perspectives. Turino emphasizes that these perspectives are not blended into a single vision.
For example, practices rooted in times that predate colonial influences can still differ from those associated with modern cosmopolitanism.
In Afghanistan in the early twentieth century, radio technology was used to broadcast nationalist ideals to rural areas. Music played on Afghan radio blended Hindustani, Persian, Pashtun, and Tajik traditions into a single national style, blurring ethnic lines at the behest of nationalist "ideologues." Early twentieth-century reformers in Turkey also made use of the radio: the nationalist state broadcast European classical music to try to unify Turkey into a modern "Western" nation, but rural populations had little interest in this music and could tune in to Egyptian radio instead. In Brazil, the scholar Mário de Andrade theorized an integration of European, African, and Amerindian styles to create a Brazilian national music; this constructed hybrid identity persisted in academic studies of Brazilian folklore and anthropology. Turino mentions how nationalists in Zimbabwe used music as a means of unification both before and after independence in 1980. During the war for independence through the 1970s, ZANU nationalists and their ZANLA guerrillas used political songs to engage the lower classes in the nationalist fight. Traditional Shona cultural practices, including music, were cited as areas of common ground through which ZANU tried to bridge divides between economic classes, attempting to create a more unified Zimbabwe amidst the fight for independence. After Robert Mugabe gained power over newly independent Zimbabwe in 1980, the government established nationalist arts programs, such as a National Dance Company and various other institutions, to preserve and define Zimbabwe's artistic culture.
The construction of national musical styles can also originate from outside a given nation. In colonial west Africa, British rulers tended to endorse the music of rural "tribal" peoples, but not the music of more economically elite indigenous groups, as representative of national identity: well-to-do populations were seen as a threat to British sovereignty, whereas lower-class peoples were not. In a related vein, the French scholar Rodolphe d'Erlanger undertook a project of reviving older musical forms in Tunisia in order to reconstruct "Oriental music" played on instruments such as the ud and ghazal. Performing ensembles using such instruments were featured at the 1932 Congress of Arab Music in Cairo.
These ideas are transferable to any musical culture around the world. In his book, Music, Race, and Nation, Peter Wade discusses the "community of anonymity, '' or the "identification of the citizen with other unknown compatriots in a common allegiance to the nation itself ''. Music allows for people within a nation to connect when they would not be able to otherwise. Two people from villages or cities hundreds or thousands of miles away may have different traditions or customs, but they also might know the same folk music from their nation, and can connect that way.
Music contributes greatly to national identity; in addition, it can differentiate among social classes, giving it social identity. On the idea of identity in music, Wade says, "A focus on the constitutive nature of music helps to grasp changing relations between musical style and identity since, instead of seeing a given style as essentially linked to a given identity, one can see how the same or similar musical styles can help constitute various identities in different contexts." And while some music is linked to a specific identity, music oftentimes crosses boundaries and changes associations over the course of many years. This identity gives music a certain authenticity for people, which is another contributing factor to nationalism. According to Wade, part of Colombia's specific nationalist music identity originated from its position on the Caribbean Sea. As a major center of commercialization in the region, Colombian culture (including its music) became standardized as it was influenced by outside cultures, and soon Colombians began to consume different types of music than before, as "tastes and ideas were all being formed within the whole changing ideological fields of nation, race, gender, sexuality, modernity, and tradition." Wade points specifically to the success of Carlos Vives's 1993 album featuring modernized versions of vallenato songs of the 1930s from the Caribbean coastal region; updating those songs gave them new life and identity for a modern audience. Music also began to be imported from other nations, further changing musical styles as musicians found new inspiration in other cultures.
The example of the Colombian coastal region demonstrates how globalization has affected nationalism. As music becomes more and more globalized, the concept of a nation's musical identity can fade. Performers often face a choice: stick to their traditional musical roots, or conform to popular trends and present a modernized fusion of cultures in their music. This dilemma will only continue to grow in the years to come.
World beat can be considered contrary to nationalism, designed to appeal to a more global audience by mixing styles of disparate cultures. This may compromise cultural authenticity while commodifying cultural tradition. (see Globalization)
Through technological advances of the late twentieth century, recordings of music from around the world began to enter the Euro-American music industry. Timothy Taylor discusses the arrival and development of new terminology in the face of globalization. The term "world music" arose in the 1990s as a marketing term to categorize and sell records from other parts of the world under a unified label, and world music was introduced as a category in the Grammys shortly thereafter. The term "world beat" was also employed in the 1990s to refer specifically to pop music, but it has fallen out of use. The issue these terms present is that they perpetuate an "us" vs. "them" dichotomy, effectively "othering" and lumping together musical categories outside the Western tradition for the sake of marketing.
Turino proposes the use of the term "cosmopolitanism '' rather than "globalization '' to refer to contact between world musical cultures, since this term suggests a more equitable sharing of music traditions and acknowledges that multiple cultures can productively share influence and ownership of particular musical styles. Another relevant concept is glocalization, and a typology for how this phenomenon impacts music (called "Glocal BAG model '') is proposed in the book Music Glocalization.
The issue of appropriation has come to the forefront in discussions of music 's globalization, since many Western European and North American artists have participated in "revitalization through appropriation, '' claiming sounds and techniques from other cultures as their own and adding them to their work without properly crediting the origins of this music. Steven Feld explores this issue further, putting it in the context of colonialism: admiration alone of another culture 's music does not constitute appropriation, but in combination with power and domination (economic or otherwise), insufficient value is placed on the music 's origin and appropriation has taken place. If the originators of a piece of music are given due credit and recognition, this problem can be avoided.
Feld criticizes the claim to ownership of appropriated music through his examination of Paul Simon 's collaboration with South African musicians during the recording of his Graceland album. Simon paid the South African musicians for their work, but he was given all of the legal rights to the music. Although the collaboration was characterized by what seems to be fair compensation and mutual respect, Feld suggests that Simon should not be able to claim complete ownership of the music. Feld holds the music industry accountable for this phenomenon, because the system gives legal and artistic credit to major contract artists, who hire musicians like "wage laborers '' given how little pay or credit those musicians receive. This system rewards the creativity of bringing the musical components of a song together rather than rewarding the actual creators of the music. As globalization continues, this system allows capitalist cultures to absorb and appropriate other musical cultures while receiving full credit for the musical arrangement.
Feld also discusses the subjective nature of appropriation, and how society 's evaluation of each case determines the severity of the offense. When American singer James Brown borrowed African rhythms, and when the African musician Fela Kuti borrowed elements of style from James Brown, their common cultural roots made the connection more acceptable to society. However, when the Talking Heads borrowed style from James Brown, the distance between the artists and the appropriated music was more overt to the public eye, and the instance became more controversial from an ethical standpoint. Thus, the issue of cycling Afro - Americanization and Africanization in Afro - American / African musical material and ideas is embedded in "power and control because of the nature of record companies and their cultivation of an international pop music elite with the power to sell enormous numbers of recordings. ''
Dr. Gibb Schreffler also examines globalization and diaspora through the lens of Punjabi pop music. Schreffler 's writing on bhangra music is a commentary on the dissemination of music and its physical movement. As he suggests, the function and reception of Punjabi music changed drastically as increasing migration and globalization catalyzed the need for a cohesive Punjabi identity, emerging "as a stopgap during a period that was marked by the combination of large - scale experiences of separation from the homeland with as yet poor communication channels. '' In the 1930s, before liberation from British colonial rule, music that carried the explicit "Punjabi '' label primarily had the function of regional entertainment. In contrast, Punjabi music of the 1940s and 50s coincided with a wave of Punjabi nationalism that replaced regionalist ideals of earlier times. The music began to form a particular genteel identity in the 1960s that was accessible even to Punjabi expatriates.
During the 1970s and 80s, Punjabi pop music began to adhere aesthetically to more cosmopolitan tastes, often overshadowing music that reflected a truly authentic Punjabi identity. Soon after, the geographic and cultural locality of Punjabi pop became a prevalent theme, reflecting a strong relationship to the globalization of widespread preferences. Schreffler explains this shift in the role of Punjabi pop in terms of different worlds of performance: amateur, professional, sacred, art, and mediated. These worlds are primarily defined by the act and function of the musical act, and each is a type of marked activity that influences how the musical act is perceived and the social norms and restrictions to which it is subject. Punjabi popular music falls into the mediated world due to globalization and the dissemination of commercial music separating performance from its immediate context. Thus, Punjabi popular music eventually "evolved to neatly represent certain dualities that are considered to characterize Punjabi identity: East / West, guardians of tradition / embracers of new technology, local / diaspora. ''
Another example of globalization in music concerns cases of traditions that are officially recognized by UNESCO, or promoted by national governments, as cases of notable global heritage. In this way, local traditions are introduced to a global audience as something that is so important as to both represent a nation and be of relevance to all people everywhere.
Cognitive psychology, neuroscience, anatomy, and similar fields have endeavored to understand how music relates to an individual 's perception, cognition, and behavior. Research topics include pitch perception, representation and expectation, timbre perception, rhythmic processing, event hierarchies and reductions, musical performance and ability, musical universals, musical origins, music development, cross-cultural cognition, evolution, and more.
From the cognitive perspective, the brain perceives auditory stimuli as music according to gestalt principles, or "principles of grouping. '' Gestalt principles include proximity, similarity, closure, and continuation. Each of the gestalt principles illustrates a different element of auditory stimuli that cause them to be perceived as a group, or as one unit of music. Proximity dictates that auditory stimuli that are near to each other are seen as a group. Similarity dictates that when multiple auditory stimuli are present, the similar stimuli are perceived as a group. Closure is the tendency to perceive an incomplete auditory pattern as a whole -- the brain "fills in '' the gap. And continuation dictates that auditory stimuli are more likely to be perceived as a group when they follow a continuous, detectable pattern.
The perception of music has a quickly growing body of literature. Structurally, the auditory system is able to distinguish different pitches (sound waves of varying frequency) via the complementary vibrating of the eardrum. It can also parse incoming sound signals via pattern recognition mechanisms. Cognitively, the brain is often constructionist when it comes to pitch. If one removes the fundamental pitch from a harmonic spectrum, the brain can still "hear '' that missing fundamental and identify it through an attempt to reconstruct a coherent harmonic spectrum.
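The "missing fundamental '' effect described above can be sketched numerically: for exactly harmonic partials, the pitch the brain reconstructs corresponds to the greatest common divisor of the partial frequencies. The following is an illustrative simplification of the brain's pattern-matching, not a model from the perception literature; the function name and frequencies are invented for the example.

```python
from functools import reduce
from math import gcd

def missing_fundamental(harmonics_hz):
    """Fundamental implied by a set of exactly harmonic partials.

    For integer harmonic frequencies, the reconstructed pitch
    corresponds to their greatest common divisor.
    """
    return reduce(gcd, harmonics_hz)

# Remove the 200 Hz fundamental from its harmonic spectrum:
# the remaining partials still imply a 200 Hz pitch.
print(missing_fundamental([400, 600, 800, 1000]))  # 200
```

Real auditory reconstruction is more tolerant (it works for slightly inharmonic partials), but the GCD captures the idea of fitting a coherent harmonic spectrum to incomplete input.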
Research suggests, however, that much of musical perception is learned. Contrary to popular belief, absolute pitch is acquired during a critical developmental period, or only for familiar timbres. Debate continues over whether Western chords are naturally consonant or dissonant, or whether that ascription is learned. The relation of pitch to frequency is a universal phenomenon, but scale construction is culturally specific, and training in a cultural scale results in melodic and harmonic expectations.
Cornelia Fales has explored the ways that expectations of timbre are learned based on past correlations. She has offered three main characteristics of timbre: timbre constitutes a link to the external world, it functions as perceptualization 's primary instrument, and it is a musical element that we experience without informational consciousness. Fales has gone into in - depth exploration of humankind 's perceptual relation to timbre, noting that out of all of the musical elements, our perception of timbre is the most divergent from the physical acoustic signal of the sound itself. Growing from this concept, she also discusses the "paradox of timbre '', the idea that perceived timbre exists only in the mind of the listener and not in the objective world. In Fales ' exploration of timbre, she discusses three broad categories of timbre manipulation in musical performance throughout the world. The first of these, timbral anomaly by extraction, involves the breaking of acoustic elements from the perceptual fusion of timbre of which they were part, leading to a splintering of the perceived acoustic signal (demonstrated in overtone singing and didjeridoo music). The second, timbral anomaly by redistribution, is a redistribution of gestalt components to new groups, creating a "chimeric '' sound composed of percepts made up of components from several sources (as seen in Ghanaian balafon music or the bell tone in barbershop singing). Finally, timbral juxtaposition consists of juxtaposing sounds that fall on opposing ends of a continuum of timbral structure that extends from harmonically - based to formant - structured timbres (as demonstrated again in overtone singing or the use of the "minde '' ornament in Indian sitar music). Overall, these three techniques form a scale of progressively more effective control of perceptualization as reliance on the acoustic world increases.
In Fales ' examinations of these types of timbre manipulation within Inanga and Kubandwa songs, she synthesizes her scientific research on the subjective / objective dichotomy of timbre with culture - specific phenomena, such as the interactions between music (the known world) and spiritual communication (the unknown world).
Cognitive research has also been applied to ethnomusicological studies of rhythm. Some ethnomusicologists believe that African and Western rhythms are organized differently. Western rhythms may be based on ratio relationships, while African rhythms may be organized additively. In this view, that means that Western rhythms are hierarchical in nature, while African rhythms are serial. One study that provides empirical support for this view was published by Magill and Pressing in 1997. The researchers recruited a highly experienced drummer who produced prototypical rhythmic patterns. Magill and Pressing then used Wing & Kristofferson 's (1973) mathematical modeling to test different hypotheses on the timing of the drummer. One version of the model used a metrical structure; however, the authors found that this structure was not necessary. All drumming patterns could be interpreted within an additive structure, supporting the idea of a universal ametrical organization scheme for rhythm.
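The contrast between additive and ratio-based (hierarchical) rhythmic organization can be illustrated with a toy sketch. This is a schematic illustration only, not Magill and Pressing's analysis or the Wing & Kristofferson timing model; the function names and pulse counts are invented for the example.

```python
from itertools import accumulate

def additive_onsets(groups):
    """Onsets of an additive rhythm: group lengths (in pulses)
    are simply chained end to end, e.g. 3 + 3 + 2."""
    return [0] + list(accumulate(groups))[:-1]

def metrical_onsets(total_pulses, divisions):
    """Onsets of a hierarchical (ratio-based) metre: the cycle
    is divided evenly at each level, e.g. 8 pulses split 2 x 2."""
    step = total_pulses
    for d in divisions:
        step //= d
    return list(range(0, total_pulses, step))

# The same 8-pulse cycle, organized two different ways:
print(additive_onsets([3, 3, 2]))  # [0, 3, 6]  uneven serial groups
print(metrical_onsets(8, [2, 2]))  # [0, 2, 4, 6]  even hierarchical division
```

The additive scheme yields unevenly spaced onsets from serial concatenation, while the metrical scheme derives evenly spaced onsets from nested ratio divisions of the whole cycle.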
Researchers have also attempted to use psychological and biological principles to understand more complex musical phenomena such as performance behavior or the evolution of music, but have reached little consensus in these areas. It is generally accepted that errors in performance give insight into perception of a music 's structure, but these studies are restricted to the Western score - reading tradition thus far. Currently there are several theories to explain the evolution of music. One such theory, expanded on by Ian Cross, is the idea that music piggy - backed on the ability to produce language and evolved to enable and promote social interaction. Cross bases his account on the fact that music is a humanly ancient art seen throughout nearly every example of human culture. Since opinions vary on what precisely can be defined as "music '', Cross defines it as "complexly structured, affectively significant, attentionally entraining, and immediately -- yet indeterminately -- meaningful, '' noting that all known cultures have some art form that can be defined in this way. In the same article, Cross examines the communicative power of music, exploring its role in minimizing within - group conflict and bringing social groups together, and claiming that music could have served the function of managing intra - and inter-group interactions throughout the course of human evolution. Essentially, Cross proposes that music and language evolved together, serving contrasting functions that have been equally essential to the evolution of humankind. Additionally, Bruno Nettl has proposed that music evolved to increase efficiency of vocal communication over long distances, or enabled communication with the supernatural.
The idea of decolonization is not new to the field of ethnomusicology. As early as 2006, the idea became a central topic of discussion for the Society for Ethnomusicology. In humanities and education studies, the term decolonization is used to describe "an array of processes involving social justice, resistance, sustainability, and preservation ''. However, in ethnomusicology, decolonization is considered to be a metaphor by some scholars. Linda Tuhiwai Smith, a professor of indigenous studies in New Zealand, offered a look into the shift decolonization has taken: "decolonization, once viewed as the formal process of handing over the instruments of government, is now recognized as a long - term process involving the bureaucratic, cultural, linguistic and psychological divesting of colonial power. '' For ethnomusicology, this shift means that fundamental changes in power structures, worldviews, academia, and the university system need to be analyzed as a confrontation of colonialism. A proposed decolonized approach to ethnomusicology involves reflecting on the philosophies and methodologies that constitute the discipline.
The decolonization of ethnomusicology takes multiple paths. These proposed approaches are: i) ethnomusicologists addressing their roles as scholars, ii) the university system being analyzed and revised, iii) the philosophies, and thus practices, as a discipline being changed. The Fall / Winter 2016 issue of the Society of Ethnomusicology 's Student News contains a survey about decolonizing ethnomusicology to see their readers ' views on what decolonizing ethnomusicology entailed. The different themes were: i) decentering ethnomusicology from the United States and Europe, ii) expanding / transforming the discipline, iii) recognizing privilege and power, and iv) constructing spaces to actually talk about decolonizing ethnomusicology among peers and colleagues.
One of the issues proposed by Brendan Kibbee for "decolonizing '' ethnomusicology is how scholars might reorganize disciplinary practices to broaden the base of ideas and thinkers. One idea posed is that privileging the written word over other forms of media scholarship hinders a great deal of potential contributors from finding a space in the disciplinary sphere. The possible influence of a Western bias against listening as an intellectual practice could be a reason for the lack of diversity of opinion and background within the field. The colonial aspect comes from European prejudices regarding subjects ' intellectual abilities, derived from the Kantian view of listening as a "danger to the autonomy of the enlightened liberal subject. '' As colonists reorganized the economic global order, they also created a system that tied social mobility to the ability to assimilate European schooling, forming a meritocracy of sorts. Many barriers keep "postcolonial '' voices out of the academic sphere, such as the inability to recognize intellectual depth in local practices of knowledge production and transmission. If ethnomusicologists start to rethink the ways in which they communicate with one another, the sphere of academia could be opened to include more than just the written word, allowing new voices to participate.
Another topic of discussion for decolonizing ethnomusicology is the existence of archives, as either a legacy of colonial ethnomusicology or a model for digital democracy. Comparative musicologists used archives such as the Berlin Phonogramm - Archiv to compare the musics of the world. The current functions of such public archives within institutions and on the internet have been analyzed by ethnomusicologists. Activists and ethnomusicologists working with archives of recorded sound, like Aaron Fox, associate professor of music and director of the Center for Ethnomusicology at Columbia University, have undertaken recovery and repatriation projects as an attempt at decolonizing the field. Another ethnomusicologist who has developed major music repatriation projects is Diane Thram, who works with the International Library of African Music. Similar work has been dedicated to film and field video.
Benjamin Koen, Gregory Barz, and Kenneth Brummel - Smith characterize medical ethnomusicology as "a new field of integrative research and applied practice that explores holistically the roles of music and sound phenomena and related praxes in any cultural and clinical context of health and healing ''. Medical ethnomusicology often focuses specifically on music and its effect on the biological, psychological, social, emotional, and spiritual realms of health. In this regard, medical ethnomusicologists have found applications of music to combat a broad range of health issues; music has found usage in the treatment of autism, dementia, AIDS and HIV, while also finding use in social and spiritual contexts through the restoration of community and the role of music in prayer and meditation.
Theresa Allison served at a nursing home in 2006 - 2007, studying the effects of music on the residents of the home. The Home, as she refers to it in her publications, was rather unusual in that music was of utmost priority: the Home has over 60 hours of music and performing arts activities scheduled weekly, and dozens of residents actively participate in songwriting groups. The Home has produced a professional CD, Island on a Hill, and an award - winning documentary, A ' Specially Wonderful Affair, both in 2002. With such emphasis placed in the arts, Allison concludes that the creation and performance of music has increased the residents ' quality of life by allowing them to remain active in their society through songwriting. Songwriting in the Home has fostered a sense of community among the residents and a means of transcending the institution by bringing in memories and experiences from outside their physical space.
Music has been found to be particularly effective in combatting dementia. In 2008, Kenneth Brummel - Smith studied the state of care for those with Alzheimer 's disease (AD) and found care to be largely unsatisfactory. Instead, Brummel - Smith looks toward music as a promising treatment: he observes that nursing home residents with AD are capable of participating in structured music activities late into the disease, and that music can be used to enhance social, emotional, and cognitive skills in those with AD. Brummel - Smith calls for a more interdisciplinary approach to combatting AD, which may include music therapy where suitable for a given patient.
Kathleen Van Buren conducted fieldwork in Nairobi and Sheffield with the purpose of enacting positive change in the context of HIV and AIDS in each environment. Van Buren speaks about utilizing music as an agent of social change; in Nairobi, she witnessed individuals and organizations drawing upon music and the arts to promote social change within their respective communities. In Sheffield, Van Buren offered a new class on "Music and Health '' at the University of Sheffield as well as a World AIDS Day event with the theme "Hope through the Arts ''. After the conclusion of these events, Van Buren published her findings and offered a to - do list for the ethnomusicology of HIV and AIDS. Namely, she urged ethnomusicologists to research and engage with the music community in order to facilitate the development of educational and therapy programs to further the fight against AIDS.
An example of music used in the treatment of autism is the Music - Play Project (MPP). The MPP was inspired by an interaction in which Benjamin Koen and Michael Bakan invited their families to eat dinner together. After dinner, Koen and Bakan took out some drums and started playing music together. Mark, a 3 - year - old member of the Bakan family who suffered from Asperger 's syndrome, began engaging with the music in a way that Koen describes as "miraculous ''. Bakan describes Mark 's experience as a "remarkable and positive behavioral / emotional transformation in him ''. After that moment, Koen and Bakan began hosting a six week program in which three children, accompanied by their parents, engage in freeform improvisational music creation alongside Koen and Bakan. Participants play on gamelan gongs, metallophones, and drums, which are chosen for providing rewarding sounds with minimal technique and effort from the participants. Koen and Bakan recount that the Music - Play Project has proven successful in providing children with key experiences that are particularly important in development, including forming new friendships among participants and facilitating fresh interactions between children and their parents.
Koen 's research has also extended into the realm of the spiritual; he analyzed the role of music in maddâh, a form of prayer. Koen believed in music - prayer dynamics, which modeled the dynamic relationship between music, prayer, and healing. Maddâh is unique in that it encompasses all three elements of music - prayer dynamics over the course of a ceremony. Koen describes a maddâh ceremony as such: "during a maddâh ceremony, one experiences music alone, prayer alone, music and prayer combined, and unified music - prayer ''. In particular, Koen focused on the restorative properties of maddâh as it was utilized in Badakhshan, Tajikistan. As the economically poorest region of Tajikistan, Badakhshan has a precarious culture of health care: there is no running water or plumbing in homes, satisfactory nutrition is hard to come by, and the psychological distress that comes with these factors leads to an abundance of health issues. As a result, maddâh is utilized to maintain health and prevent illness. Koen conducted an experiment with 40 participants from Badakhshan, in which he assessed the stress levels of those who participated in a maddâh ceremony using physical indicators of stress such as blood pressure and heart rate. Koen observed an overarching destressing effect in those who participated in maddâh, regardless of the role they assumed in the ceremony. He attributes this to familiarity: "there was enough familiarity to engage a cultural aesthetic and dynamic that allowed a person 's consciousness to approach a flexible state, which here facilitated a state of lower stress ''. Koen also noted that participants had positive feelings regarding maddâh; many of the participants commented that maddâh relieves them of their emotional burdens.
Many universities around the world offer ethnomusicology classes and act as centers for ethnomusicological research. The linked list includes graduate and undergraduate degree - granting programs.
The definition of ethnochoreology has many similarities with the current approach to ethnomusicology. With ethnochoreology 's roots in anthropology taken into account, and given the way it is studied in the field, dance is most accurately defined and studied within this academic circle in two parts: as "an integral part of a network of social events '' and "as a part of a system of knowledge and belief, social behavior and aesthetic norms and values ''. That is, the study of dance covers its performance aspects -- the physical movements, costumes, stages, performers, and accompanying sound -- along with the social context and uses within the society where it takes place.
Because of its growth alongside ethnomusicology, ethnochoreology also began with a comparative emphasis, classifying different styles based on the movements used and the geographical location in a way not dissimilar to Lomax. This is best shown in "Benesh Notation and Ethnochoreology '' (1967), published in the journal Ethnomusicology, where Hall advocates using Benesh notation as a way of documenting dance styles so that it is "possible to compare styles and techniques in detail -- even ' schools ' within one style -- and individual variations in execution from dancer to dancer. '' In the seventies and eighties, as with ethnomusicology, ethnochoreology focused on a very specific communicative type of "folklore music '' performed by small groups, and the context and performance aspects of dance were studied and emphasized as part of a whole "folkloric dance '' that needed to be preserved. This was influenced by the same human - centered "thick description '' mode of study that had moved into ethnomusicology. However, at this time, the sound and dance aspects of the performances studied were still analyzed somewhat separately from the context and social aspects of the culture around the dance.
Beginning in the mid eighties, there has been a reflexively interpretive way of writing about dance in culture that is more conscious of the impact of the scholar within the field and how it affects the culture and its relationship with the dance that the scholar is looking into. For example, because most scholars until this point were searching for the most "authentic '' folk, there was a lack of study on individual performers, popular dances, and dances of subgroups within a culture such as women, youth, and members of the LGBT community. In contrast, this newer wave of study wanted a more open study of dance within a culture. Additionally, there was a shift toward a more mutual give and take between the scholar and the subjects, who, in fieldwork, also assist the scholars as teachers and informants.
Although there are many similarities between ethnochoreology and ethnomusicology, there is a large difference in the current geographical scope of published studies. From its beginnings, ethnomusicology focused heavily on African and Asian musics, which seemed to deviate most from the European norm, while ethnochoreology, which also began in Europe, has long produced extensive studies of Eastern European "folk dances '' and relatively few of African and Asian dances, though American studies have delved into Native American and Southeast Asian dance. However, the very basis of this difference could be challenged, since many European ethnomusicological and ethnochoreological studies have been done on the "home '' folk music and dance in the name of nationalism.
The ICTM Study Group on Ethnochoreology, part of the International Council for Traditional Music, began in 1962 as a Folk Dance Commission before adopting its current name in the early seventies. With the objectives of promoting research, documentation, and interdisciplinary study of dance; providing a forum for cooperation among scholars and students of ethnochoreology by means of international meetings, publications, and correspondence; and contributing to cultural and societal understandings of humanity through the lens of dance, the Study Group meets biennially for a conference.
The Congress on Research in Dance (CORD), known since its merger with the Society of Dance History Scholars as the Dance Studies Association (DSA), began in 1964. CORD 's stated purposes are to encourage research in all aspects of dance and related fields; to foster the exchange of ideas, resources, and methodologies through publications, international and regional conferences, and workshops; and to promote the accessibility of research materials. CORD publishes a peer - reviewed scholarly journal, the Dance Research Journal, twice annually.
For articles on significant individuals in this discipline, see the List of ethnomusicologists.
List of towns in the United Kingdom - wikipedia
In England, Wales and Northern Ireland, a town traditionally was a settlement which had a charter to hold a market or fair and therefore became a "market town ''. In Scotland, the equivalent is known as a burgh (pronounced / ˈbʌɾə /). There are two types of burgh: royal burghs and burghs of barony.
The Local Government Act 1972 allows civil parishes in England and Wales to resolve themselves to be Town Councils, under section (245 subsection 6), which also gives the chairman of such parishes the title ' town mayor '. Many former urban districts and municipal boroughs have such a status, along with other settlements with no prior town status.
In more modern times it is often considered that a town becomes a city (or a village becomes a town) as soon as it reaches a certain population, although this is an informal definition and no particular numbers are agreed upon.
The cultural importance placed on charters remains, and it is not an unusual event for towns across the UK to celebrate their charter in an annual Charter Day (normally a fair or mediaeval market).
Ranked by population:
Mass -- energy equivalence - wikipedia
In physics, mass -- energy equivalence states that anything having mass has an equivalent amount of energy and vice versa, with these fundamental quantities directly relating to one another by Albert Einstein 's famous formula:
E = mc²
This formula states that the equivalent energy (E) can be calculated as the mass (m) multiplied by the speed of light (c ≈ 3 × 10⁸ m / s) squared. Similarly, anything having energy exhibits a corresponding mass m given by its energy E divided by the speed of light squared, c². Because the speed of light is a very large number in everyday units, the formula implies that even an everyday object at rest with a modest amount of mass has a very large amount of energy intrinsically. Chemical, nuclear, and other energy transformations may cause a system to lose some of its energy content (and thus some corresponding mass), releasing it as the radiant energy of light or as thermal energy for example.
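The magnitude implied by the formula can be checked with a few lines of arithmetic. This is an illustrative sketch; the TNT comparison uses the conventional figure of 4.184 × 10¹² J per kiloton.

```python
# Speed of light in m/s (exact by SI definition).
C = 299_792_458

def rest_energy_joules(mass_kg):
    """E = mc^2: the energy equivalent of a mass at rest."""
    return mass_kg * C ** 2

# Even one gram of matter corresponds to about 9 x 10^13 J,
# roughly the energy released by 21 kilotons of TNT
# (using the conventional 4.184e12 J per kiloton).
energy = rest_energy_joules(0.001)
print(f"{energy:.3e} J")                  # 8.988e+13 J
print(f"{energy / 4.184e12:.1f} kt TNT")  # 21.5 kt TNT
```

The huge value of c² is what makes the intrinsic energy of everyday masses so large compared with chemical energy scales.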
Mass -- energy equivalence arose originally from special relativity as a paradox described by Henri Poincaré. Einstein proposed it on 21 November 1905, in the paper Does the inertia of a body depend upon its energy - content?, one of his Annus Mirabilis (Miraculous Year) papers. Einstein was the first to propose that the equivalence of mass and energy is a general principle and a consequence of the symmetries of space and time.
A consequence of the mass -- energy equivalence is that if a body is stationary, it still has some internal or intrinsic energy, called its rest energy, corresponding to its rest mass. When the body is in motion, its total energy is greater than its rest energy, and, equivalently, its total mass (also called relativistic mass in this context) is greater than its rest mass. This rest mass is also called the intrinsic or invariant mass because it remains the same regardless of this motion, even for the extreme speeds or gravity considered in special and general relativity.
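The relation between rest energy and total energy can be sketched with the Lorentz factor γ = 1 / √(1 − v²/c²). The function name and the 1 kg example mass below are illustrative assumptions.

```python
from math import sqrt

C = 299_792_458  # speed of light, m/s (exact by SI definition)

def total_energy_joules(rest_mass_kg, speed_mps):
    """Total (relativistic) energy E = gamma * m0 * c^2.

    At zero speed, gamma = 1 and the result is the rest energy
    m0 * c^2; at any nonzero speed the total energy is larger.
    """
    gamma = 1.0 / sqrt(1.0 - (speed_mps / C) ** 2)
    return gamma * rest_mass_kg * C ** 2

m0 = 1.0  # kg (illustrative rest mass)
rest = total_energy_joules(m0, 0.0)
moving = total_energy_joules(m0, 0.8 * C)  # at 0.8c, gamma = 5/3
print(moving / rest)  # ~1.667: total energy exceeds rest energy
```

The ratio of total to rest energy is exactly the Lorentz factor, which is why the "relativistic mass" of a moving body exceeds its invariant rest mass.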
The mass -- energy formula also serves to convert units of mass to units of energy (and vice versa), no matter what system of measurement units is used.
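As a sketch of such a unit conversion, the formula recovers the familiar electron rest energy of about 0.511 MeV from its mass in kilograms. The electron mass below is the CODATA value, quoted for illustration; the function name is invented for the example.

```python
C = 299_792_458             # speed of light, m/s (exact)
J_PER_EV = 1.602176634e-19  # joules per electronvolt (exact, SI)

def mass_kg_to_mev(mass_kg):
    """Convert a mass in kilograms to its energy equivalent in MeV."""
    return mass_kg * C ** 2 / J_PER_EV / 1e6

# CODATA electron rest mass in kg -> the familiar 0.511 MeV.
print(round(mass_kg_to_mev(9.1093837015e-31), 3))  # 0.511
```

Particle physicists routinely exploit this equivalence by quoting particle masses directly in energy units such as MeV/c².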
The formula was initially written in many different notations, and its interpretation and justification was further developed in several steps. In "Does the inertia of a body depend upon its energy content? '' (1905), Einstein used V to mean the speed of light in a vacuum and L to mean the energy lost by a body in the form of radiation. Consequently, the equation E = mc² was not originally written as a formula but as a sentence in German saying that if a body gives off the energy L in the form of radiation, its mass diminishes by L / V². A remark placed above it informed that the equation was approximated by neglecting "magnitudes of fourth and higher orders '' of a series expansion.
In May 1907, Einstein explained that the expression for energy ε of a moving mass point assumes the simplest form when its expression for the state of rest is chosen to be ε₀ = μV² (where μ is the mass), which is in agreement with the "principle of the equivalence of mass and energy ''. In addition, Einstein used the formula μ = E₀ / V², with E₀ being the energy of a system of mass points, to describe the energy and mass increase of that system when the velocity of the differently moving mass points is increased.
In June 1907, Max Planck rewrote Einstein 's mass -- energy relationship as M = (E₀ + pV₀) / c², where p is the pressure and V₀ the volume, to express the relation between mass, its latent energy, and thermodynamic energy within the body. Subsequently, in October 1907, this was rewritten as M₀ = E₀ / c² and given a quantum interpretation by Johannes Stark, who assumed its validity and correctness (Gültigkeit).
In December 1907, Einstein expressed the equivalence in the form M = μ + E₀ / c² and concluded: A mass μ is equivalent, as regards inertia, to a quantity of energy μc². (...) It appears far more natural to consider every inertial mass as a store of energy.
In 1909, Gilbert N. Lewis and Richard C. Tolman used two variations of the formula: m = E / c² and m₀ = E₀ / c², with E being the relativistic energy (the energy of an object when the object is moving), E₀ the rest energy (the energy when not moving), m the relativistic mass (the rest mass and the extra mass gained when moving), and m₀ the rest mass (the mass when not moving). The same relations in different notation were used by Hendrik Lorentz in 1913 (published 1914), though he placed the energy on the left - hand side: ε = Mc² and ε₀ = mc², with ε being the total energy (rest energy plus kinetic energy) of a moving material point, ε₀ its rest energy, M the relativistic mass, and m the invariant (or rest) mass.
In 1911, Max von Laue gave a more comprehensive proof of M₀ = E₀ / c² from the stress -- energy tensor, which was later (1918) generalized by Felix Klein.
Einstein returned to the topic once again after World War II, and this time he wrote E = mc² in the title of his article intended as an explanation for a general reader by analogy.
Mass and energy can be seen as two names (and two measurement units) for the same underlying, conserved physical quantity. Thus, the laws of conservation of energy and conservation of (total) mass are equivalent and both hold true. Einstein elaborated in a 1946 essay that "the principle of the conservation of mass (...) proved inadequate in the face of the special theory of relativity. It was therefore merged with the energy conservation principle -- just as, about 60 years before, the principle of the conservation of mechanical energy had been combined with the principle of the conservation of heat (thermal energy). We might say that the principle of the conservation of energy, having previously swallowed up that of the conservation of heat, now proceeded to swallow that of the conservation of mass -- and holds the field alone. ''
If the conservation of mass law is interpreted as conservation of rest mass, it does not hold true in special relativity. The rest energy (equivalently, rest mass) of a particle can be converted, not "to energy '' (it already is energy (mass)), but rather to other forms of energy (mass) that require motion, such as kinetic energy, thermal energy, or radiant energy. Similarly, kinetic or radiant energy can be converted to other kinds of particles that have rest energy (rest mass). In the transformation process, neither the total amount of mass nor the total amount of energy changes, since both properties are connected via a simple constant. This view requires that if either energy or (total) mass disappears from a system, it is always found that both have simply moved to another place, where they are both measurable as an increase of both energy and mass that corresponds to the loss in the first system.
When an object is pushed in the direction of motion, it gains momentum and energy, but when the object is already traveling near the speed of light, it can not move much faster, no matter how much energy it absorbs. Its momentum and energy continue to increase without bound, whereas its speed approaches (but never reaches) a constant value -- the speed of light. This implies that in relativity the momentum of an object can not be a constant times the velocity, nor can the kinetic energy be a constant times the square of the velocity.
A property called the relativistic mass is defined as the ratio of the momentum of an object to its velocity. Relativistic mass depends on the motion of the object, so that different observers in relative motion see different values for it. If the object is moving slowly, the relativistic mass is nearly equal to the rest mass and both are nearly equal to the usual Newtonian mass. If the object is moving quickly, the relativistic mass is greater than the rest mass by an amount equal to the mass associated with the kinetic energy of the object. As the object approaches the speed of light, the relativistic mass grows infinitely, because the kinetic energy grows infinitely and this energy is associated with mass.
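The growth of the relativistic mass with speed follows from the Lorentz factor, γ = 1 / √(1 − v²/c²); a minimal sketch, using an approximate electron rest mass as the illustrative value:

```python
import math

c = 299_792_458          # speed of light, m/s

def relativistic_mass(m0, v):
    """Relativistic mass gamma * m0 for rest mass m0 [kg] at speed v [m/s]."""
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return gamma * m0

m0 = 9.109e-31                           # electron rest mass, kg (approximate)
slow = relativistic_mass(m0, 30.0)       # everyday speed: essentially the rest mass
fast = relativistic_mass(m0, 0.99 * c)   # 99% of c: about 7.1 times the rest mass
print(slow / m0, fast / m0)
```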
The relativistic mass is always equal to the total energy (rest energy plus kinetic energy) divided by c². Because the relativistic mass is exactly proportional to the energy, relativistic mass and relativistic energy are nearly synonyms; the only difference between them is the units. If length and time are measured in natural units, the speed of light is equal to 1, and even this difference disappears. Then mass and energy have the same units and are always equal, so it is redundant to speak about relativistic mass, because it is just another name for the energy. This is why physicists usually reserve the useful short word "mass '' to mean rest mass, or invariant mass, and not relativistic mass.
The relativistic mass of a moving object is larger than the relativistic mass of an object that is not moving, because a moving object has extra kinetic energy. The rest mass of an object is defined as the mass of an object when it is at rest, so that the rest mass is always the same, independent of the motion of the observer: it is the same in all inertial frames.
For things and systems made up of many parts, like an atomic nucleus, planet, or star, the relativistic mass is the sum of the relativistic masses (or energies) of the parts, because energies are additive in isolated systems. This is not true in open systems, however, if energy is subtracted. For example, if a system is bound by attractive forces, and the energy gained due to the forces of attraction in excess of the work done is removed from the system, then mass is lost with this removed energy. For example, the mass of an atomic nucleus is less than the total mass of the protons and neutrons that make it up, but this is only true after this energy from binding has been removed in the form of a gamma ray (which in this system, carries away the mass of the energy of binding). This mass decrease is also equivalent to the energy required to break up the nucleus into individual protons and neutrons (in this case, work and mass would need to be supplied). Similarly, the mass of the solar system is slightly less than the sum of the individual masses of the sun and planets.
For a system of particles going off in different directions, the invariant mass of the system is the analog of the rest mass, and is the same for all observers, even those in relative motion. It is defined as the total energy (divided by c²) in the center of mass frame (where by definition, the system total momentum is zero). A simple example of an object with moving parts but zero total momentum is a container of gas. In this case, the mass of the container is given by its total energy (including the kinetic energy of the gas molecules), since the system total energy and invariant mass are the same in any reference frame where the momentum is zero, and such a reference frame is also the only frame in which the object can be weighed. In a similar way, the theory of special relativity posits that the thermal energy in all objects (including solids) contributes to their total masses and weights, even though this energy is present as the kinetic and potential energies of the atoms in the object, and it (in a similar way to the gas) is not seen in the rest masses of the atoms that make up the object.
In a similar manner, even photons (light quanta), if trapped in a container space (as a photon gas or thermal radiation), would contribute a mass associated with their energy to the container. Such an extra mass, in theory, could be weighed in the same way as any other type of rest mass. This is true in special relativity theory, even though individually photons have no rest mass. The property that trapped energy in any form adds weighable mass to systems that have no net momentum is one of the characteristic and notable consequences of relativity. It has no counterpart in classical Newtonian physics, in which radiation, light, heat, and kinetic energy never exhibit weighable mass under any circumstances.
Just as the relativistic mass of an isolated system is conserved through time, so also is its invariant mass. This property allows the conservation of all types of mass in systems, and also conservation of all types of mass in reactions where matter is destroyed (annihilated), leaving behind the energy that was associated with it (which is now in non-material form, rather than material form). Matter may appear and disappear in various reactions, but mass and energy are both unchanged in this process.
As is noted above, two different definitions of mass have been used in special relativity, and also two different definitions of energy. The simple equation E = mc² is not generally applicable to all these types of mass and energy, except in the special case that the total additive momentum is zero for the system under consideration. In such a case, which is always guaranteed when observing the system from either its center of mass frame or its center of momentum frame, E = mc² is always true for any type of mass and energy that are chosen. Thus, for example, in the center of mass frame, the total energy of an object or system is equal to its rest mass times c², a useful equality. This is the relationship used for the container of gas in the previous example. It is not true in other reference frames where the center of mass is in motion. In these systems or for such an object, its total energy depends on both its rest (or invariant) mass, and its (total) momentum.
In inertial reference frames other than the rest frame or center of mass frame, the equation E = mc² remains true if the energy is the relativistic energy and the mass is the relativistic mass. It is also correct if the energy is the rest or invariant energy (also the minimum energy), and the mass is the rest mass, or the invariant mass. However, connection of the total or relativistic energy (E_r) with the rest or invariant mass (m₀) requires consideration of the system total momentum, in systems and reference frames where the total momentum (of magnitude p) has a non-zero value. The formula then required to connect the two different kinds of mass and energy is the extended version of Einstein 's equation, called the relativistic energy -- momentum relation:

E_r² = (m₀c²)² + (pc)²

or

E_r = √((m₀c²)² + (pc)²)

Here the (pc)² term represents the square of the Euclidean norm (total vector length) of the various momentum vectors in the system, which reduces to the square of the simple momentum magnitude, if only a single particle is considered. This equation reduces to E = mc² when the momentum term is zero. For photons, where m₀ = 0, the equation reduces to E_r = pc.
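A minimal numerical sketch of the energy -- momentum relation, checking the two limiting cases just described (the values of m₀ and p are illustrative):

```python
import math

c = 299_792_458   # speed of light, m/s

def total_energy(m0, p):
    """Total relativistic energy [J] from rest mass m0 [kg] and momentum p [kg m/s]."""
    return math.sqrt((m0 * c ** 2) ** 2 + (p * c) ** 2)

m0 = 1e-27                             # illustrative rest mass, kg
E_rest = total_energy(m0, 0.0)         # zero momentum recovers E = m0 * c**2
E_photon = total_energy(0.0, 1e-20)    # zero rest mass recovers E_r = p * c
print(E_rest, E_photon)
```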
Mass -- energy equivalence states that any object has a certain energy, even when it is stationary. In Newtonian mechanics, a motionless body has no kinetic energy, and it may or may not have other amounts of internal stored energy, like chemical energy or thermal energy, in addition to any potential energy it may have from its position in a field of force. In Newtonian mechanics, all of these energies are much smaller than the mass of the object times the speed of light squared.
In relativity, all the energy that moves with an object (that is, all the energy present in the object 's rest frame) contributes to the total mass of the body, which measures how much it resists acceleration. Each bit of potential and kinetic energy makes a proportional contribution to the mass. As noted above, even if a box of ideal mirrors "contains '' light, then the individually massless photons still contribute to the total mass of the box, by the amount of their energy divided by c².
In relativity, removing energy is removing mass, and for an observer in the center of mass frame, the formula m = E / c² indicates how much mass is lost when energy is removed. In a nuclear reaction, the mass of the atoms that come out is less than the mass of the atoms that go in, and the difference in mass shows up as heat and light with the same relativistic mass as the difference (and also the same invariant mass in the center of mass frame of the system). In this case, the E in the formula is the energy released and removed, and the mass m is how much the mass decreases. In the same way, when any sort of energy is added to an isolated system, the increase in the mass is equal to the added energy divided by c². For example, when water is heated it gains about 1.11 × 10⁻¹⁷ kg of mass for every joule of heat added to the water.
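The heated-water figure can be checked directly from Δm = ΔE / c²:

```python
c = 299_792_458                   # speed of light, m/s
dE = 1.0                          # one joule of heat added to the water
dm = dE / c ** 2                  # corresponding mass increase, kg
print(f"{dm:.3e} kg per joule")   # about 1.11e-17 kg
```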
An object moves with different speed in different frames, depending on the motion of the observer, so the kinetic energy in both Newtonian mechanics and relativity is frame dependent. This means that the amount of relativistic energy, and therefore the amount of relativistic mass, that an object is measured to have depends on the observer. The rest mass is defined as the mass that an object has when it is not moving (or when an inertial frame is chosen such that it is not moving). The term also applies to the invariant mass of systems when the system as a whole is not "moving '' (has no net momentum). The rest and invariant masses are the smallest possible value of the mass of the object or system. They also are conserved quantities, so long as the system is isolated. Because of the way they are calculated, the effects of moving observers are subtracted, so these quantities do not change with the motion of the observer.
The rest mass is almost never additive: the rest mass of an object is not the sum of the rest masses of its parts. The rest mass of an object is the total energy of all the parts, including kinetic energy, as measured by an observer that sees the center of the mass of the object to be standing still. The rest mass adds up only if the parts are standing still and do not attract or repel, so that they do not have any extra kinetic or potential energy. The other possibility is that they have a positive kinetic energy and a negative potential energy that exactly cancels.
Whenever any type of energy is removed from a system, the mass associated with the energy is also removed, and the system therefore loses mass. This mass defect in the system may be simply calculated as Δm = ΔE / c², and this was the form of the equation historically first presented by Einstein in 1905. However, use of this formula in such circumstances has led to the false idea that mass has been "converted '' to energy. This may be particularly the case when the energy (and mass) removed from the system is associated with the binding energy of the system. In such cases, the binding energy is observed as a "mass defect '' or deficit in the new system.
The fact that the released energy is not easily weighed in many such cases may cause its mass to be neglected, as though it no longer existed. This circumstance has encouraged the false idea of conversion of mass to energy, rather than the correct idea that the binding energy of such systems is relatively large and exhibits a measurable mass, which is removed when the binding energy is removed.
The difference between the rest mass of a bound system and of the unbound parts is the binding energy of the system, if this energy has been removed after binding. For example, a water molecule weighs a little less than two free hydrogen atoms and an oxygen atom. The minuscule mass difference is the energy needed to split the molecule into three individual atoms (divided by c²), which was given off as heat when the molecule formed (this heat had mass). Likewise, a stick of dynamite in theory weighs a little bit more than the fragments after the explosion, but this is true only so long as the fragments are cooled and the heat removed. In this case the mass difference is the energy / heat that is released when the dynamite explodes, and when this heat escapes, the mass associated with it escapes, only to be deposited in the surroundings, which absorb the heat (so that total mass is conserved).
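The water-molecule example can be made quantitative. The ~917 kJ/mol atomization energy used below is an assumed textbook value for breaking both O - H bonds; the result shows why the mass difference is far too small to weigh chemically:

```python
c = 299_792_458                  # speed of light, m/s
N_A = 6.02214076e23              # Avogadro constant, 1/mol

E_atomization = 917e3 / N_A      # J per molecule (assumed ~917 kJ/mol)
dm = E_atomization / c ** 2      # mass defect per molecule, kg

m_H2O = 18.015e-3 / N_A          # mass of one water molecule, kg
print(dm / m_H2O)                # fractional mass defect, under one part per billion
```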
Such a change in mass may only happen when the system is open, and the energy and mass escapes. Thus, if a stick of dynamite is blown up in a hermetically sealed chamber, the mass of the chamber and fragments, the heat, sound, and light would still be equal to the original mass of the chamber and dynamite. If sitting on a scale, the weight and mass would not change. This would in theory also happen even with a nuclear bomb, if it could be kept in an ideal box of infinite strength, which did not rupture or pass radiation. Thus, a 21.5 kiloton (9 × 10¹³ joule) nuclear bomb produces about one gram of heat and electromagnetic radiation, but the mass of this energy would not be detectable in an exploded bomb in an ideal box sitting on a scale; instead, the contents of the box would be heated to millions of degrees without changing total mass and weight. If then, however, a transparent window (passing only electromagnetic radiation) were opened in such an ideal box after the explosion, and a beam of X-rays and other lower - energy light allowed to escape the box, it would eventually be found to weigh one gram less than it had before the explosion. This weight loss and mass loss would happen as the box was cooled by this process, to room temperature. However, any surrounding mass that absorbed the X-rays (and other "heat '') would gain this gram of mass from the resulting heating, so the mass "loss '' would represent merely its relocation. Thus, no mass (or, in the case of a nuclear bomb, no matter) would be "converted '' to energy in such a process. Mass and energy, as always, would both be separately conserved.
Massless particles have zero rest mass. Their relativistic mass is simply their relativistic energy, divided by c², or m = E / c². The energy for photons is E = hf, where h is Planck 's constant and f is the photon frequency. This frequency and thus the relativistic energy are frame - dependent.
If an observer runs away from a photon in the direction the photon travels from a source, then when the photon catches up, the observer sees it as having less energy than it had at the source. The faster the observer is traveling with regard to the source when the photon catches up, the less energy the photon has. As an observer approaches the speed of light with regard to the source, the photon looks redder and redder, by the relativistic Doppler effect (the Doppler shift is the relativistic formula), and the energy of a very long - wavelength photon approaches zero. This is because the photon is massless -- the rest mass of a photon is zero.
Two photons moving in different directions can not both be made to have arbitrarily small total energy by changing frames, or by moving toward or away from them. The reason is that in a two - photon system, the energy of one photon is decreased by chasing after it, but the energy of the other increases with the same shift in observer motion. For two photons not moving in the same direction, there exists an inertial frame where the combined energy is smallest, but not zero. This is called the center of mass frame or the center of momentum frame; these terms are almost synonyms (the center of mass frame is the special case of a center of momentum frame where the center of mass is put at the origin). The most that chasing a pair of photons can accomplish to decrease their energy is to put the observer in a frame where the photons have equal energy and are moving directly away from each other. In this frame, the observer is now moving in the same direction and speed as the center of mass of the two photons. The total momentum of the photons is now zero, since their momenta are equal and opposite. In this frame the two photons, as a system, have a mass equal to their total energy divided by c². This mass is called the invariant mass of the pair of photons together. It is the smallest mass and energy the system may be seen to have, by any observer. It is only the invariant mass of a two - photon system that can be used to make a single particle with the same rest mass.
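The two-photon argument can be sketched numerically. For a one-dimensional system, the invariant mass follows from (Mc²)² = E_tot² - (p_tot c)²; the photon energy below is illustrative:

```python
import math

c = 299_792_458   # speed of light, m/s

def invariant_mass(E_tot, p_tot):
    """Invariant mass [kg] from total energy [J] and 1-D total momentum [kg m/s]."""
    return math.sqrt(max(E_tot ** 2 - (p_tot * c) ** 2, 0.0)) / c ** 2

E = 1e-13                 # each photon's energy, J (illustrative)
p = E / c                 # photon momentum magnitude, since E = p c

back_to_back = invariant_mass(2 * E, 0.0)   # opposite momenta cancel
parallel = invariant_mass(2 * E, 2 * p)     # momenta add
print(back_to_back, parallel)  # 2E/c^2 for the back-to-back pair; ~0 for the parallel pair
```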
If the photons are formed by the collision of a particle and an antiparticle, the invariant mass is the same as the total energy of the particle and antiparticle (their rest energy plus the kinetic energy), in the center of mass frame, where they automatically move in equal and opposite directions (since they have equal momentum in this frame). If the photons are formed by the disintegration of a single particle with a well - defined rest mass, like the neutral pion, the invariant mass of the photons is equal to rest mass of the pion. In this case, the center of mass frame for the pion is just the frame where the pion is at rest, and the center of mass does not change after it disintegrates into two photons. After the two photons are formed, their center of mass is still moving the same way the pion did, and their total energy in this frame adds up to the mass energy of the pion. Thus, by calculating the invariant mass of pairs of photons in a particle detector, pairs can be identified that were probably produced by pion disintegration.
A similar calculation illustrates that the invariant mass of systems is conserved, even when massive particles (particles with rest mass) within the system are converted to massless particles (such as photons). In such cases, the photons contribute invariant mass to the system, even though they individually have no invariant mass or rest mass. Thus, an electron and positron (each of which has rest mass) may undergo annihilation with each other to produce two photons, each of which is massless (has no rest mass). However, in such circumstances, no system mass is lost. Instead, the system of both photons moving away from each other has an invariant mass, which acts like a rest mass for any system in which the photons are trapped, or that can be weighed. Thus, not only the quantity of relativistic mass, but also the quantity of invariant mass does not change in transformations between "matter '' (electrons and positrons) and energy (photons).
In physics, there are two distinct concepts of mass: the gravitational mass and the inertial mass. The gravitational mass is the quantity that determines the strength of the gravitational field generated by an object, as well as the gravitational force acting on the object when it is immersed in a gravitational field produced by other bodies. The inertial mass, on the other hand, quantifies how much an object accelerates if a given force is applied to it. The mass -- energy equivalence in special relativity refers to the inertial mass. However, already in the context of Newtonian gravity, the Weak Equivalence Principle is postulated: the gravitational and the inertial mass of every object are the same. Thus, the mass -- energy equivalence, combined with the Weak Equivalence Principle, results in the prediction that all forms of energy contribute to the gravitational field generated by an object. This observation is one of the pillars of the general theory of relativity.
The above prediction, that all forms of energy interact gravitationally, has been subject to experimental tests. The first observation testing this prediction was made in 1919. During a solar eclipse, Arthur Eddington observed that the light from stars passing close to the Sun was bent. The effect is due to the gravitational attraction of light by the Sun. The observation confirmed that the energy carried by light indeed is equivalent to a gravitational mass. Another seminal experiment, the Pound -- Rebka experiment, was performed in 1960. In this test a beam of light was emitted from the top of a tower and detected at the bottom. The frequency of the light detected was higher than that of the light emitted. This result confirms that the energy of photons increases when they fall in the gravitational field of the Earth. The energy, and therefore the gravitational mass, of photons is proportional to their frequency, as stated by Planck 's relation.
Max Planck pointed out that the mass -- energy equivalence formula implied that bound systems would have a mass less than the sum of their constituents, once the binding energy had been allowed to escape. However, Planck was thinking about chemical reactions, where the binding energy is too small to measure. Einstein suggested that radioactive materials such as radium would provide a test of the theory, but even though a large amount of energy is released per atom in radium, due to the half - life of the substance (1602 years), only a small fraction of radium atoms decay over an experimentally measurable period of time.
Once the nucleus was discovered, experimenters realized that the very high binding energies of atomic nuclei should allow them to be calculated simply from mass differences. But it was not until the discovery of the neutron in 1932, and the measurement of the neutron mass, that this calculation could actually be performed (see nuclear binding energy for example calculation). A little while later, the Cockcroft -- Walton accelerator produced the first transmutation reaction (⁷Li + p → 2 ⁴He), verifying Einstein 's formula to an accuracy of ± 0.5 %. In 2005, Rainville et al. published a direct test of the energy - equivalence of mass lost in the binding energy of a neutron to atoms of particular isotopes of silicon and sulfur, by comparing the mass lost to the energy of the emitted gamma ray associated with the neutron capture. The binding mass - loss agreed with the gamma ray energy to a precision of ± 0.00004 %, the most accurate test of E = mc² to date.
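The Cockcroft -- Walton result can be reproduced from standard atomic masses (quoted here to six decimals; treat them as approximate tabulated values):

```python
u_to_MeV = 931.494      # energy equivalent of 1 atomic mass unit, MeV
m_Li7 = 7.016004        # mass of 7Li, u
m_H1 = 1.007825         # mass of 1H, u
m_He4 = 4.002602        # mass of 4He, u

dm = (m_Li7 + m_H1) - 2 * m_He4   # mass lost in 7Li + p -> 2 4He, in u
Q = dm * u_to_MeV                 # energy released, MeV
print(f"Q = {Q:.2f} MeV")         # about 17.3 MeV
```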
The mass -- energy equivalence formula was used in the understanding of nuclear fission reactions, and implies the great amount of energy that can be released by a nuclear fission chain reaction, used in both nuclear weapons and nuclear power. By measuring the mass of different atomic nuclei and subtracting from that number the total mass of the protons and neutrons as they would weigh separately, one gets the exact binding energy available in an atomic nucleus. This is used to calculate the energy released in any nuclear reaction, as the difference in the total mass of the nuclei that enter and exit the reaction.
Einstein used the CGS system of units (centimeters, grams, seconds, dynes, and ergs), but the formula is independent of the system of units. In natural units, the numerical value of the speed of light is set to equal 1, and the formula expresses an equality of numerical values: E = m. In the SI system (expressing the ratio E / m in joules per kilogram using the value of c in meters per second): E / m = c² = (299 792 458 m / s)² ≈ 9.0 × 10¹⁶ J / kg.
So the energy equivalent of one kilogram of mass is approximately 9.0 × 10¹⁶ joules, about 25 billion kilowatt - hours, or the explosive energy of roughly 21.5 megatons of TNT.
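These equivalents follow from a one-line computation (the kilowatt-hour and TNT conversion factors are the conventional ones):

```python
c = 299_792_458               # speed of light, m/s
E = 1.0 * c ** 2              # energy equivalent of 1 kg, joules

kWh = E / 3.6e6               # 1 kWh = 3.6e6 J
Mt_TNT = E / 4.184e15         # 1 megaton of TNT = 4.184e15 J (convention)
print(f"{E:.3e} J = {kWh:.2e} kWh = {Mt_TNT:.1f} Mt TNT")
```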
Any time energy is generated, the process can be evaluated from an E = mc² perspective. For instance, the "Gadget '' - style bomb used in the Trinity test and the bombing of Nagasaki had an explosive yield equivalent to 21 kt of TNT. About 1 kg of the approximately 6.15 kg of plutonium in each of these bombs fissioned into lighter elements totaling almost exactly one gram less, after cooling. The electromagnetic radiation and kinetic energy (thermal and blast energy) released in this explosion carried the missing one gram of mass. This occurs because nuclear binding energy is released whenever elements with more than 62 nucleons fission.
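The one-gram figure for a 21-kiloton yield checks out with m = E / c²:

```python
c = 299_792_458               # speed of light, m/s
kt_TNT = 4.184e12             # joules per kiloton of TNT (convention)

E = 21 * kt_TNT               # ~8.8e13 J explosive yield
m = E / c ** 2                # mass carried away by the released energy, kg
print(f"{m * 1000:.2f} g")    # just under one gram
```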
Another example is hydroelectric generation. The electrical energy produced by Grand Coulee Dam 's turbines every 3.7 hours represents one gram of mass. This mass passes to electrical devices (such as lights in cities) powered by the generators, where it appears as a gram of heat and light. Turbine designers look at their equations in terms of pressure, torque, and RPM. However, Einstein 's equations show that all energy has mass, and thus the electrical energy produced by a dam 's generators, and the resulting heat and light, all retain their mass -- which is equivalent to the energy. The potential energy -- and equivalent mass -- represented by the waters of the Columbia River as it descends to the Pacific Ocean would be converted to heat due to viscous friction and the turbulence of white water rapids and waterfalls were it not for the dam and its generators. This heat would remain as mass on site at the water, were it not for the equipment that converted some of this potential and kinetic energy into electrical energy, which can move from place to place (taking mass with it).
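The 3.7-hour figure can be cross-checked; the ~6.8 GW generating capacity assumed below is an approximate round number for the dam's output, not a figure from the text:

```python
c = 299_792_458               # speed of light, m/s
P = 6.8e9                     # assumed electrical output of the dam, watts
t = 3.7 * 3600                # 3.7 hours, in seconds

m = P * t / c ** 2            # mass equivalent of the energy generated, kg
print(f"{m * 1000:.2f} g")    # roughly one gram
```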
Whenever energy is added to a system, the system gains mass, as shown when the equation is rearranged: m = E / c².
Note that no net mass or energy is really created or lost in any of these examples and scenarios. Mass / energy simply moves from one place to another. These are some examples of the transfer of energy and mass in accordance with the principle of mass -- energy conservation.
Although mass can not be converted to energy, in some reactions matter particles (which contain a form of rest energy) can be destroyed and the energy released can be converted to other types of energy that are more usable and obvious as forms of energy -- such as light and energy of motion (heat, etc.). However, the total amount of energy and mass does not change in such a transformation. Even when particles are not destroyed, a certain fraction of the ill - defined "matter '' in ordinary objects can be destroyed, and its associated energy liberated and made available as the more dramatic energies of light and heat, even though no identifiable real particles are destroyed, and even though (again) the total energy and total mass are unchanged. Such conversions between types of energy (resting to active energy) happen in nuclear weapons, in which the protons and neutrons in atomic nuclei lose a small fraction of their average mass, but this mass loss is not due to the destruction of any protons or neutrons (or even, in general, lighter particles like electrons). Also the mass is not destroyed, but simply removed from the system in the form of heat and light from the reaction.
In nuclear reactions, typically only a small fraction of the total mass -- energy of the bomb converts into the mass -- energy of heat, light, radiation, and motion -- which are "active '' forms that can be used. When an atom fissions, it loses only about 0.1 % of its mass (which escapes from the system and does not disappear), and additionally, in a bomb or reactor not all the atoms can fission. In a modern fission - based atomic bomb, the efficiency is only about 40 %, so only 40 % of the fissionable atoms actually fission, and only about 0.03 % of the fissile core mass appears as energy in the end. In nuclear fusion, more of the mass is released as usable energy, roughly 0.3 %. But in a fusion bomb, the bomb mass is partly casing and non-reacting components, so that in practicality, again (coincidentally) no more than about 0.03 % of the total mass of the entire weapon is released as usable energy (which, again, retains the "missing '' mass). See nuclear weapon yield for practical details of this ratio in modern nuclear weapons.
In theory, it should be possible to destroy matter and convert all of the rest - energy associated with matter into heat and light (which would of course have the same mass), but none of the theoretically known methods are practical. One way to convert all the energy within matter into usable energy is to annihilate matter with antimatter. But antimatter is rare in our universe, and must be made first. Due to inefficient mechanisms of production, making antimatter always requires far more usable energy than would be released when it was annihilated.
Since most of the mass of ordinary objects resides in protons and neutrons, converting all the energy of ordinary matter into more useful energy requires that the protons and neutrons be converted to lighter particles, or particles with no rest mass at all. In the Standard Model of particle physics, the number of protons plus neutrons is nearly exactly conserved. Still, Gerard 't Hooft showed that there is a process that converts protons and neutrons to antielectrons and neutrinos: the weak SU(2) instanton proposed by Belavin, Polyakov, Schwarz and Tyupkin. This process can in principle destroy matter and convert all the energy of matter into neutrinos and usable energy, but it is normally extraordinarily slow. It later became clear that this process happens at a fast rate at very high temperatures, since instanton-like configurations are then copiously produced from thermal fluctuations. The temperature required is so high that it would only have been reached shortly after the Big Bang.
Many extensions of the standard model contain magnetic monopoles, and in some models of grand unification, these monopoles catalyze proton decay, a process known as the Callan -- Rubakov effect. This process would be an efficient mass -- energy conversion at ordinary temperatures, but it requires making monopoles and anti-monopoles first. The energy required to produce monopoles is believed to be enormous, but magnetic charge is conserved, so that the lightest monopole is stable. All these properties are deduced in theoretical models -- magnetic monopoles have never been observed, nor have they been produced in any experiment so far.
A third known method of total matter -- energy "conversion" (which again in practice only means conversion of one type of energy into a different type of energy) uses gravity, specifically black holes. Stephen Hawking theorized that black holes radiate thermally with no regard to how they are formed. So it is theoretically possible to throw matter into a black hole and use the emitted heat to generate power. According to the theory of Hawking radiation, however, a black hole radiates at a higher rate the smaller it is, producing usable power only at small black hole masses, where "usable" may for example mean power greater than the local background radiation. The radiated power is proportional to the inverse square of the hole's mass, so it increases as the mass decreases. In a "practical" scenario, mass and energy could be dumped into the black hole to regulate this growth, or to keep its size, and thus its power output, near constant, compensating for the mass and energy carried away from the hole by its thermal radiation.
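The inverse-square dependence mentioned above can be written explicitly; a common form, assuming the standard Hawking power formula for a non-rotating, uncharged black hole (not stated in the article), is:

```latex
P = \frac{\hbar c^{6}}{15360\,\pi\,G^{2}M^{2}} \;\propto\; \frac{1}{M^{2}}
```

Here M is the black hole mass, G the gravitational constant, and ħ the reduced Planck constant; halving the mass quadruples the radiated power.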
In developing special relativity, Einstein found that the kinetic energy of a moving body is

E_k = γmc² − mc² = (γ − 1)mc²
with v the velocity, m the rest mass, and γ the Lorentz factor.
He included the second term on the right to make sure that for small velocities the energy would be the same as in classical mechanics, thus satisfying the correspondence principle:

E_k ≈ ½mv²  (for v ≪ c)
Without this second term, there would be an additional contribution in the energy when the particle is not moving.
Einstein found that the total momentum of a moving particle is:

p = γmv
It is this quantity that is conserved in collisions. The ratio of the momentum to the velocity is the relativistic mass, m_rel.
And the relativistic mass and the relativistic kinetic energy are related by the formula:

E_k = (m_rel − m)c²
Einstein wanted to omit the unnatural second term on the right-hand side, whose only purpose is to make the energy at rest zero, and to declare that the particle has a total energy, which obeys:

E = γmc²
which is a sum of the rest energy mc² and the kinetic energy. This total energy is mathematically more elegant, and fits better with the momentum in relativity. But to come to this conclusion, Einstein needed to think carefully about collisions. This expression for the energy implied that matter at rest has a huge amount of energy, and it is not clear whether this energy is physically real, or just a mathematical artifact with no physical meaning.
In a collision process where all the rest - masses are the same at the beginning as at the end, either expression for the energy is conserved. The two expressions only differ by a constant that is the same at the beginning and at the end of the collision. Still, by analyzing the situation where particles are thrown off a heavy central particle, it is easy to see that the inertia of the central particle is reduced by the total energy emitted. This allowed Einstein to conclude that the inertia of a heavy particle is increased or diminished according to the energy it absorbs or emits.
After Einstein first made his proposal, it became clear that the word mass can have two different meanings. Some denote the relativistic mass with an explicit index:

m_rel = γm₀
This mass is the ratio of momentum to velocity, and it is also the relativistic energy divided by c² (it is not Lorentz-invariant, in contrast to the rest mass m₀). The equation E = m_rel c² holds for moving objects. When the velocity is small, the relativistic mass and the rest mass are almost exactly the same.
Einstein (following Hendrik Lorentz and Max Abraham) also used velocity- and direction-dependent mass concepts (longitudinal and transverse mass) in his 1905 electrodynamics paper and in another paper in 1906. However, in his first paper on E = mc² (1905), he treated m as what would now be called the rest mass. Some claim that (in later years) he did not like the idea of "relativistic mass". When modern physicists say "mass", they are usually talking about rest mass, since if they meant "relativistic mass", they would just say "energy".
Considerable debate has ensued over the use of the concept "relativistic mass '' and the connection of "mass '' in relativity to "mass '' in Newtonian dynamics. For example, one view is that only rest mass is a viable concept and is a property of the particle; while relativistic mass is a conglomeration of particle properties and properties of spacetime. A perspective that avoids this debate, due to Kjell Vøyenli, is that the Newtonian concept of mass as a particle property and the relativistic concept of mass have to be viewed as embedded in their own theories and as having no precise connection.
We can rewrite the expression E = γmc² as a Taylor series:

E = mc² + ½mv² + ⅜m(v⁴/c²) + ...
For speeds much smaller than the speed of light, higher-order terms in this expression get smaller and smaller because v/c is small. For low speeds we can ignore all but the first two terms:

E ≈ mc² + ½mv²

The total energy is a sum of the rest energy and the Newtonian kinetic energy.
The classical energy equation ignores both the mc² part and the high-speed corrections. This is appropriate, because all the high-order corrections are small. Since only changes in energy affect the behavior of objects, whether we include the mc² part makes no difference, since it is constant. For the same reason, it is possible to subtract the rest energy from the total energy in relativity. By considering the emission of energy in different frames, Einstein could show that the rest energy has a real physical meaning.
The higher - order terms are extra corrections to Newtonian mechanics, and become important at higher speeds. The Newtonian equation is only a low - speed approximation, but an extraordinarily good one. All of the calculations used in putting astronauts on the moon, for example, could have been done using Newton 's equations without any of the higher - order corrections. The total mass energy equivalence should also include the rotational and vibrational kinetic energies as well as the linear kinetic energy at low speeds.
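As a numerical sketch of how good the Newtonian approximation is at spacecraft speeds (the 11 km/s figure, roughly Earth escape speed, is an illustrative assumption, not from the article):

```python
import math

# Compare relativistic and Newtonian kinetic energy per kilogram at
# roughly Earth escape speed.
c = 299_792_458.0   # speed of light, m/s
v = 11_000.0        # m/s, illustrative choice

beta2 = (v / c) ** 2
s = math.sqrt(1.0 - beta2)
gamma_minus_1 = beta2 / (s * (1.0 + s))   # equals gamma - 1, computed
                                          # without catastrophic cancellation

relativistic_ke = gamma_minus_1 * c ** 2  # J per kilogram
newtonian_ke = 0.5 * v ** 2               # J per kilogram

relative_error = relativistic_ke / newtonian_ke - 1.0
print(relative_error)   # on the order of 1e-9
```

The fractional correction is about (3/4)(v/c)², roughly one part in a billion at this speed, which is why Newton's equations sufficed for the Moon missions.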
While Einstein was the first to have correctly deduced the mass -- energy equivalence formula, he was not the first to have related energy with mass. But nearly all previous authors thought that the energy that contributes to mass comes only from electromagnetic fields.
In 1717 Isaac Newton speculated that light particles and matter particles were interconvertible in "Query 30 '' of the Opticks, where he asks:
Are not the gross bodies and light convertible into one another, and may not bodies receive much of their activity from the particles of light which enter their composition?
In 1734 the Swedish scientist and theologian Emanuel Swedenborg in his Principia theorized that all matter is ultimately composed of dimensionless points of "pure and total motion. '' He described this motion as being without force, direction or speed, but having the potential for force, direction and speed everywhere within it.
There were many attempts in the 19th and the beginning of the 20th century -- like those of J.J. Thomson (1881), Oliver Heaviside (1888), George Frederick Charles Searle (1897), Wilhelm Wien (1900), Max Abraham (1902), and Hendrik Antoon Lorentz (1904) -- to understand how the mass of a charged object depends on the electrostatic field. This concept was called electromagnetic mass, and was considered as being dependent on velocity and direction as well. Lorentz (1904) gave the following expressions for longitudinal and transverse electromagnetic mass:

m_L = m_em / (1 − v²/c²)^(3/2),  m_T = m_em / (1 − v²/c²)^(1/2)

where

m_em = (4/3) E_em / c²
Another way of deriving some sort of electromagnetic mass was based on the concept of radiation pressure. In 1900, Henri Poincaré associated electromagnetic radiation energy with a "fictitious fluid" having momentum and mass

m_em = E_em / c²
By that, Poincaré tried to save the center of mass theorem in Lorentz 's theory, though his treatment led to radiation paradoxes.
Friedrich Hasenöhrl showed in 1904 that electromagnetic cavity radiation contributes the "apparent mass"

m = (8/3) E / c²
to the cavity 's mass. He argued that this implies mass dependence on temperature as well.
Albert Einstein did not formulate exactly the formula E = mc² in his 1905 Annus Mirabilis paper "Does the Inertia of an Object Depend Upon Its Energy Content?"; rather, the paper states that if a body gives off the energy L in the form of radiation, its mass diminishes by L/c². (Here, "radiation" means electromagnetic radiation, or light, and mass means the ordinary Newtonian mass of a slow-moving object.) This formulation relates only a change Δm in mass to a change L in energy, without requiring the absolute relationship.
Objects with zero mass presumably have zero energy, so the extension that all mass is proportional to energy is obvious from this result. In 1905, even the hypothesis that changes in energy are accompanied by changes in mass was untested. Not until the discovery of the first type of antimatter (the positron in 1932) was it found that all of the mass of pairs of resting particles could be converted to radiation.
Already in his relativity paper "On the electrodynamics of moving bodies", Einstein derived the correct expression for the kinetic energy of particles:

W = mc² (1/√(1 − v²/c²) − 1)
Now the question remained open as to which formulation applies to bodies at rest. This was tackled by Einstein in his paper "Does the inertia of a body depend upon its energy content?", where he used a body emitting two light pulses in opposite directions, having energies of E₀ before and E₁ after the emission as seen in its rest frame. As seen from a moving frame, these become H₀ and H₁. Einstein obtained:

(H₀ − E₀) − (H₁ − E₁) = E (1/√(1 − v²/c²) − 1)

then he argued that H − E can only differ from the kinetic energy K by an additive constant, which gives

K₀ − K₁ = E (1/√(1 − v²/c²) − 1)

Neglecting effects higher than third order in v/c after a Taylor series expansion of the right side of this gives:

K₀ − K₁ = (E/c²) v²/2
Einstein concluded that the emission reduces the body's mass by E/c², and that the mass of a body is a measure of its energy content.
The correctness of Einstein 's 1905 derivation of E = mc was criticized by Max Planck (1907), who argued that it is only valid to first approximation. Another criticism was formulated by Herbert Ives (1952) and Max Jammer (1961), asserting that Einstein 's derivation is based on begging the question. On the other hand, John Stachel and Roberto Torretti (1982) argued that Ives ' criticism was wrong, and that Einstein 's derivation was correct. Hans Ohanian (2008) agreed with Stachel / Torretti 's criticism of Ives, though he argued that Einstein 's derivation was wrong for other reasons. For a recent review, see Hecht (2011).
An alternative version of Einstein 's thought experiment was proposed by Fritz Rohrlich (1990), who based his reasoning on the Doppler effect. Like Einstein, he considered a body at rest with mass M. If the body is examined in a frame moving with nonrelativistic velocity v, it is no longer at rest and in the moving frame it has momentum P = Mv. Then he supposed the body emits two pulses of light to the left and to the right, each carrying an equal amount of energy E / 2. In its rest frame, the object remains at rest after the emission since the two beams are equal in strength and carry opposite momentum.
However, if the same process is considered in a frame that moves with velocity v to the left, the pulse moving to the left is redshifted, while the pulse moving to the right is blue shifted. The blue light carries more momentum than the red light, so that the momentum of the light in the moving frame is not balanced: the light is carrying some net momentum to the right.
The object has not changed its velocity before or after the emission. Yet in this frame it has lost some right - momentum to the light. The only way it could have lost momentum is by losing mass. This also solves Poincaré 's radiation paradox, discussed above.
The velocity is small, so the right-moving light is blueshifted by the nonrelativistic Doppler shift factor 1 + v/c. The momentum of the light is its energy divided by c, and it is increased by a fraction v/c. So the right-moving light is carrying an extra momentum ΔP given by:

ΔP = (v/c)(E/2c) = Ev/2c²
The left - moving light carries a little less momentum, by the same amount ΔP. So the total right - momentum in the light is twice ΔP. This is the right - momentum that the object lost.
The momentum of the object in the moving frame after the emission is reduced to this amount:

P′ = Mv − 2ΔP = (M − E/c²) v
So the change in the object's mass is equal to the total energy lost divided by c². Since any emission of energy can be carried out by a two-step process, where first the energy is emitted as light and then the light is converted to some other form of energy, any emission of energy is accompanied by a loss of mass. Similarly, by considering absorption, a gain in energy is accompanied by a gain in mass.
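The conclusion of the argument above can be restated compactly (here Δm denotes the mass change; this is a summary, not an additional result):

```latex
\Delta P = \frac{E\,v}{c^{2}} = (\Delta m)\,v
\quad\Longrightarrow\quad
\Delta m = \frac{E}{c^{2}}
```

Since the velocity v is arbitrary (and small), the lost momentum can only be accounted for by a mass decrease of E/c².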
Like Poincaré, Einstein concluded in 1906 that the inertia of electromagnetic energy is a necessary condition for the center - of - mass theorem to hold. On this occasion, Einstein referred to Poincaré 's 1900 paper and wrote:
Although the merely formal considerations, which we will need for the proof, are already mostly contained in a work by H. Poincaré, for the sake of clarity I will not rely on that work.
In Einstein's more physical, as opposed to formal or mathematical, point of view, there was no need for fictitious masses. He could avoid the perpetuum mobile problem because, on the basis of the mass -- energy equivalence, he could show that the transport of inertia that accompanies the emission and absorption of radiation solves the problem. Poincaré's rejection of the principle of action -- reaction can be avoided through Einstein's E = mc², because mass conservation appears as a special case of the energy conservation law.
During the nineteenth century there were several speculative attempts to show that mass and energy were proportional in various ether theories. In 1873 Nikolay Umov pointed out a relation between mass and energy for ether in the form of E = kmc², where 0.5 ≤ k ≤ 1. The writings of Samuel Tolver Preston, and a 1903 paper by Olinto De Pretto, presented a mass -- energy relation. Bartocci (1999) observed that there were only three degrees of separation linking De Pretto to Einstein, concluding that Einstein was probably aware of De Pretto's work.
Preston and De Pretto, following Le Sage, imagined that the universe was filled with an ether of tiny particles that always move at speed c. Each of these particles has a kinetic energy of mc² up to a small numerical factor. The nonrelativistic kinetic energy formula did not always include the traditional factor of 1/2, since Leibniz introduced kinetic energy without it, and the 1/2 is largely conventional in prerelativistic physics. By assuming that every particle has a mass that is the sum of the masses of the ether particles, the authors concluded that all matter contains an amount of kinetic energy either given by E = mc² or 2E = mc², depending on the convention. A particle ether was usually considered unacceptably speculative science at the time, and since these authors did not formulate relativity, their reasoning is completely different from that of Einstein, who used relativity to change frames.
Independently, Gustave Le Bon in 1905 speculated that atoms could release large amounts of latent energy, reasoning from an all - encompassing qualitative philosophy of physics.
It was quickly noted after the discovery of radioactivity in 1896 that the total energy due to radioactive processes is about one million times greater than that involved in any known molecular change. However, this raised the question of where the energy was coming from. After eliminating the idea of absorption and emission of some sort of Lesagian ether particles, the existence of a huge amount of latent energy, stored within matter, was proposed by Ernest Rutherford and Frederick Soddy in 1903. Rutherford also suggested that this internal energy is stored within normal matter as well. He went on to speculate in 1904:
If it were ever found possible to control at will the rate of disintegration of the radio - elements, an enormous amount of energy could be obtained from a small quantity of matter.
Einstein's equation is in no way an explanation of the large energies released in radioactive decay (these come from the powerful nuclear forces involved, forces that were still unknown in 1905). In any case, the enormous energy released from radioactive decay (which had been measured by Rutherford) was much more easily measured than the (still small) change in the gross mass of materials as a result. Einstein's equation, in theory, can give these energies by measuring mass differences before and after reactions, but in practice, these mass differences in 1905 were still too small to be measured in bulk. Prior to this, the ease of measuring radioactive decay energies with a calorimeter was thought likely to allow measurement of changes in mass difference, as a check on Einstein's equation itself. Einstein mentions in his 1905 paper that mass -- energy equivalence might perhaps be tested with radioactive decay, which releases enough energy (the quantitative amount known roughly by 1905) to possibly be "weighed" when missing from the system (having been given off as heat). However, radioactivity seemed to proceed at its own unalterable (and, for the radioactive elements known then, quite slow) pace, and even when simple nuclear reactions became possible using proton bombardment, the idea that these great amounts of usable energy could be liberated at will with any practicality proved difficult to substantiate. Rutherford was reported in 1933 to have declared that this energy could not be exploited efficiently: "Anyone who expects a source of power from the transformation of the atom is talking moonshine."
This situation changed dramatically in 1932 with the discovery of the neutron and its mass, allowing mass differences for single nuclides and their reactions to be calculated directly, and compared with the sum of masses for the particles that made up their composition. In 1933, the energy released from the reaction of lithium - 7 plus protons giving rise to 2 alpha particles (as noted above by Rutherford), allowed Einstein 's equation to be tested to an error of ± 0.5 %. However, scientists still did not see such reactions as a practical source of power, due to the energy cost of accelerating reaction particles.
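The 1933 lithium check described above can be reproduced with modern textbook atomic-mass values (the specific tabulated numbers below are standard reference values, assumed here rather than taken from the article):

```python
# Li-7 + p -> 2 alpha: mass defect converted to energy via E = m * c**2.
# Atomic masses in unified atomic mass units (u), standard tabulated values.
m_li7 = 7.016003      # lithium-7
m_p   = 1.007825      # hydrogen-1
m_he4 = 4.002602      # helium-4

U_TO_MEV = 931.494    # energy equivalent of 1 u, in MeV

mass_defect = (m_li7 + m_p) - 2 * m_he4   # ~0.0186 u disappears
energy_mev = mass_defect * U_TO_MEV
print(energy_mev)     # ~17.3 MeV, close to the measured reaction energy
```

The computed ~17 MeV agrees with the energy measured for the two alpha particles, the kind of comparison that tested Einstein's equation to within ±0.5%.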
After the very public demonstration of huge energies released from nuclear fission after the atomic bombings of Hiroshima and Nagasaki in 1945, the equation E = mc² became directly linked in the public eye with the power and peril of nuclear weapons. The equation was featured as early as page 2 of the Smyth Report, the official 1945 release by the US government on the development of the atomic bomb, and by 1946 the equation was linked closely enough with Einstein's work that the cover of Time magazine prominently featured a picture of Einstein next to an image of a mushroom cloud emblazoned with the equation. Einstein himself had only a minor role in the Manhattan Project: he had cosigned a letter to the U.S. President in 1939 urging funding for research into atomic energy, warning that an atomic bomb was theoretically possible. The letter persuaded Roosevelt to devote a significant portion of the wartime budget to atomic research. Without a security clearance, Einstein's only scientific contribution was an analysis of an isotope separation method in theoretical terms. It was inconsequential, on account of Einstein not being given sufficient information (for security reasons) to fully work on the problem.
While E = mc² is useful for understanding the amount of energy potentially released in a fission reaction, it was not strictly necessary to develop the weapon, once the fission process was known and its energy measured at 200 MeV (which was directly possible, using a quantitative Geiger counter, at that time). As the physicist and Manhattan Project participant Robert Serber put it: "Somehow the popular notion took hold long ago that Einstein's theory of relativity, in particular his famous equation E = mc², plays some essential role in the theory of fission. Albert Einstein had a part in alerting the United States government to the possibility of building an atomic bomb, but his theory of relativity is not required in discussing fission. The theory of fission is what physicists call a non-relativistic theory, meaning that relativistic effects are too small to affect the dynamics of the fission process significantly." However, the association between E = mc² and nuclear energy has since stuck, and because of this association, and its simple expression of the ideas of Albert Einstein himself, it has become "the world's most famous equation".
While Serber's view of the strict lack of need to use mass -- energy equivalence in designing the atomic bomb is correct, it does not take into account the pivotal role this relationship played in making the fundamental leap to the initial hypothesis that large atoms were energetically allowed to split into approximately equal parts (before this energy was in fact measured). In late 1938, Lise Meitner and Otto Robert Frisch -- while on a winter walk during which they solved the meaning of Hahn's experimental results and introduced the idea that would be called atomic fission -- directly used Einstein's equation to help them understand the quantitative energetics of the reaction that overcame the "surface tension-like" forces that hold the nucleus together, and allowed the fission fragments to separate to a configuration from which their charges could force them into an energetic fission. To do this, they used packing fraction, or nuclear binding energy values for elements, which Meitner had memorized. These, together with use of E = mc², allowed them to realize on the spot that the basic fission process was energetically possible:
... We walked up and down in the snow, I on skis and she on foot... and gradually the idea took shape... explained by Bohr's idea that the nucleus is like a liquid drop; such a drop might elongate and divide itself... We knew there were strong forces that would resist,... just as surface tension. But nuclei differed from ordinary drops. At this point we both sat down on a tree trunk and started to calculate on scraps of paper... the Uranium nucleus might indeed be a very wobbly, unstable drop, ready to divide itself... But,... when the two drops separated they would be driven apart by electrical repulsion, about 200 MeV in all. Fortunately Lise Meitner remembered how to compute the masses of nuclei... and worked out that the two nuclei formed... would be lighter by about one-fifth the mass of a proton. Now whenever mass disappears energy is created, according to Einstein's formula E = mc², and... the mass was just equivalent to 200 MeV; it all fitted!
|
what is the features of unix operating system | Unix architecture - wikipedia
A Unix architecture is a computer operating system architecture that embodies the Unix philosophy. It may adhere to standards such as the Single UNIX Specification (SUS) or the similar POSIX (IEEE) standards. No single published standard describes all Unix architecture operating systems; this is in part a legacy of the Unix wars.
There are many systems that are Unix-like in their architecture. Notable among these are the GNU/Linux distributions. The distinctions between Unix and Unix-like systems have been the subject of heated legal battles, and the holders of the UNIX brand, The Open Group, object to "Unix-like" and similar terms.
For distinctions between SUS branded UNIX architectures and other similar architectures, see Unix - like.
A Unix kernel -- the core or key components of the operating system -- consists of many subsystems: process management and scheduling, file management, device management, network management, memory management, and the handling of interrupts from hardware devices.
Each of the subsystems has some features:
The kernel provides these and other basic services: interrupt and trap handling, separation between user and system space, system calls, scheduling, timer and clock handling, file descriptor management.
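As an illustrative sketch of the file descriptor management and system-call services mentioned above (the scratch file path is an arbitrary assumption), Python's `os` module exposes thin wrappers around the corresponding Unix calls:

```python
import os

# open(2), write(2), lseek(2), read(2), close(2) via thin wrappers;
# the kernel hands back a small integer file descriptor.
path = "/tmp/unix_fd_demo.txt"   # arbitrary scratch path for the demo
fd = os.open(path, os.O_CREAT | os.O_RDWR | os.O_TRUNC, 0o600)

os.write(fd, b"hello, kernel")
os.lseek(fd, 0, os.SEEK_SET)     # rewind to the start of the file
data = os.read(fd, 32)           # read back what was written

os.close(fd)                     # release the descriptor
os.unlink(path)                  # remove the scratch file
print(data)                      # b'hello, kernel'
```

The same small set of calls works uniformly on regular files, devices, pipes, and sockets, which is the essence of the Unix "everything is a file" abstraction.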
Some key features of the Unix architecture concept are:
The UNIX operating system supports the following features and capabilities:
The UNIX-HATERS Handbook covers some of these design features as failures from the user's point of view. Although some of its information is dated and no longer applies to modern Unix-like systems such as Linux, Eric S. Raymond found that several of the issues it raises persist, while others have been resolved. Raymond concludes that not all of the concepts behind Unix can be deemed dysfunctional, even though the book's intention may have been to portray Unix as inferior without encouraging discussion with developers to actually fix the issues.
|
who did ryan hurst play in saving private ryan | Ryan Hurst - wikipedia
Ryan Douglas Hurst (born June 19, 1976) is an American actor, best known for his roles as Gerry Bertier in Disney 's Remember the Titans, Tom Clark in Taken, Opie Winston in the FX network drama series Sons of Anarchy, and as Chick in Bates Motel.
Hurst was born in Santa Monica, California, the son of Candace Kaniecki, an acting coach, and Rick Hurst, an actor. He attended Santa Monica High School.
Growing up in a Hollywood family, Hurst made an early start in show business, with a recurring role in the NBC teen sitcom Saved by the Bell: The New Class. In the 1998 epic war drama film Saving Private Ryan, Hurst portrayed Mandelsohn, a paratrooper who, because of temporary hearing loss, cannot understand Captain Miller's (Tom Hanks) questions about sighting Private Ryan, forcing Miller to ask the questions in writing. He also appeared in the 2002 war film We Were Soldiers as Sgt. Ernie Savage, played the football player Lump Hudson in the black comedy thriller The Ladykillers (2004), and starred in the TNT police drama series Wanted (2005). From 2005 to 2007, Hurst gained recognition for portraying the recurring role of Allison DuBois' half-brother, Michael Benoit, in NBC's supernatural procedural drama series Medium.
Hurst's big break came when he was cast as Opie Winston in the FX crime drama series Sons of Anarchy. Originally a recurring cast member in the first season, he was promoted to main cast member for the following season and went on to become a fan favorite. His character, newly released from a five-year prison stint and "living right" but not making ends meet, goes back to SAMCRO to provide for his family, despite his wife's objections and his knowing the risks. Hurst's portrayal of Opie earned him the 2011 Satellite Award for Best Supporting Actor -- Series, Miniseries or Television Film. Also in 2011, Hurst voiced Jedidiah in the animated box office hit Rango. He also starred in the series Outsiders.
In 1994, Hurst met Molly Cookson and the couple married in May 2005. Together, they founded the production company Fast Shoes. In April 2013, Hurst purchased a 3,400 square - foot home in Woodland Hills, California for $1.71 million.
|
who plays spiderman in the new spider-man homecoming | Spider - Man: Homecoming - Wikipedia
Spider - Man: Homecoming is a 2017 American superhero film based on the Marvel Comics character Spider - Man, co-produced by Columbia Pictures and Marvel Studios, and distributed by Sony Pictures Releasing. It is the second Spider - Man film reboot and the sixteenth film of the Marvel Cinematic Universe (MCU). The film is directed by Jon Watts, with a screenplay by the writing teams of Jonathan Goldstein and John Francis Daley, Watts and Christopher Ford, and Chris McKenna and Erik Sommers. Tom Holland stars as Spider - Man, alongside Michael Keaton, Jon Favreau, Zendaya, Donald Glover, Tyne Daly, Marisa Tomei and Robert Downey Jr. In Spider - Man: Homecoming, Peter Parker tries to balance high school life with being Spider - Man, while facing the Vulture.
In February 2015, Marvel Studios and Sony reached a deal to share the character rights of Spider - Man, integrating the character into the established MCU. The following June, Holland was cast as the title character, while Watts was hired to direct, followed shortly by the casting of Tomei and the hiring of Daley and Goldstein. In April 2016, the film 's title was revealed, along with additional cast, including Downey. Principal photography began in June 2016 at Pinewood Atlanta Studios in Fayette County, Georgia, and continued in Atlanta, Los Angeles and New York City. The additional screenwriters were revealed during filming, which concluded in Berlin in October 2016. The production team made efforts to differentiate the film from previous Spider - Man incarnations.
Spider - Man: Homecoming premiered in Hollywood on June 28, 2017, and was released in the United States on July 7, 2017, in 3D, IMAX and IMAX 3D. Homecoming has grossed over $879 million worldwide, making it the second most successful Spider - Man film and the fourth highest - grossing film of 2017. It received positive reviews, with critics praising Holland and the other cast 's performances, the light tone and the action sequences. A sequel is scheduled to be released on July 5, 2019.
Following the Battle of New York, Adrian Toomes and his salvage company are contracted to clean up the city, but their operation is taken over by the Department of Damage Control (D.O.D.C.), a partnership between Tony Stark and the U.S. government. Enraged at being driven out of business, Toomes persuades his employees to keep the Chitauri technology they have already scavenged and use it to create and sell advanced weapons. Eight years later, Peter Parker is drafted into the Avengers by Stark to help with an internal dispute, but resumes his studies at the Midtown School of Science and Technology when Stark tells him he is not yet ready to become a full Avenger.
Parker quits his school 's academic decathlon team to spend more time focusing on his crime - fighting activities as Spider - Man. One night, after preventing criminals from robbing an ATM with their advanced weapons from Toomes, Parker returns to his Queens apartment where his best friend Ned discovers his secret identity. On another night, Parker comes across Toomes ' associates Jackson Brice / Shocker and Herman Schultz selling weapons to local criminal Aaron Davis. Parker saves Davis before being caught by Toomes and dropped in a lake, nearly drowning after becoming tangled in a parachute built into his suit. He is rescued by Stark, who is monitoring the Spider - Man suit he gave Parker and warns him against further involvement with the criminals. Toomes accidentally kills Brice with one of their weapons, and Schultz becomes the new Shocker.
Parker and Ned study a weapon left behind by Brice, removing its power core. When a tracking device on Schultz leads to Maryland, Parker rejoins the decathlon team and accompanies them to Washington, D.C. for their national tournament. Ned and Parker disable the tracker Stark implanted in the Spider - Man suit, and unlock its advanced features. Parker tries to stop Toomes from stealing weapons from a D.O.D.C. truck, but is trapped inside the truck, causing him to miss the decathlon tournament. When he discovers that the power core is an unstable Chitauri grenade, Parker races to the Washington Monument where the core explodes and traps Ned and their friends in an elevator. Evading local authorities, Parker saves his friends, including his fellow classmate and crush Liz. Returning to New York City, Parker persuades Davis to reveal Toomes ' whereabouts. Aboard the Staten Island Ferry, Parker captures Toomes ' new buyer Mac Gargan, but Toomes escapes and a malfunctioning weapon tears the ferry in half. Stark helps Parker save the passengers before admonishing him for his recklessness and confiscating his suit.
Parker returns to his high school life, and eventually asks Liz to go to the homecoming dance with him. On the night of the dance, Parker learns that Liz is Toomes ' daughter. Deducing Parker 's secret identity, Toomes threatens retaliation if he interferes with his plans. During the dance, Parker realizes Toomes is planning to hijack a D.O.D.C. plane transporting weapons from Avengers Tower to the team 's new headquarters. He dons his old homemade Spider - Man suit and races to Toomes ' lair. He is first ambushed by Schultz, but defeats him with the help of Ned. At the lair, Toomes destroys the building 's support beams and leaves Parker to die. Parker escapes the rubble and intercepts the plane, steering it to crash on the beach near Coney Island. He and Toomes continue fighting, ending with Parker saving Toomes ' life after some unstable material explodes, and leaving him for the police along with the plane 's cargo. After her father 's arrest, Liz moves away, and Parker declines an invitation from Stark to join the Avengers full - time. Stark returns Parker 's suit, which he puts on at his apartment just as his Aunt May walks in.
In a mid-credits scene, an incarcerated Gargan approaches Toomes in prison. Gargan has heard that Toomes knows Spider - Man 's real identity, but Toomes denies this.
Additionally, Gwyneth Paltrow, Kerry Condon, and Chris Evans reprise their roles as Pepper Potts, F.R.I.D.A.Y., and Steve Rogers / Captain America from previous MCU films, respectively. Rogers appears in public service announcements played at Parker 's school. Jacob Batalon portrays Parker 's best friend Ned, a "complete gamer '', whom Batalon described as "the quintessential best guy, the best man, the number two guy, the guy in the chair '' for Parker. Marvel used Ned Leeds as a basis for the character, who does not have a last name in the script or film, but essentially created their own character with him. Carroll said that Ned and other characters in the film are composites of several of their favorites from Spider - Man comics, and while Ned may eventually wind up with the last name "Leeds '', it is not a guarantee. Laura Harrier portrays Liz, a senior, Parker 's love interest, and Toomes ' daughter, with a "type - A '' personality. Tony Revolori plays Eugene "Flash '' Thompson, Parker 's rival and classmate. It was noted that the character is generally depicted as a white bully in the comics; the Guatemalan American actor received death threats upon his casting. Revolori worked hard "to do him justice '', as he is an important character to the fans. Rather than being a physically imposing jock, Thompson was re-imagined as "a rich, smug kid '' to reflect modern views of bullying, crafting him into a social media bully and rival for Parker as opposed to a jock; this depiction was largely informed by Holland 's visit to The Bronx High School of Science. Revolori said that Thompson has to work hard to match Parker 's intelligence, which is "one of the reasons he does n't like Peter. Everyone else seems to like Peter, so he 's like, why do n't they like me like they like him? '' Revolori gained 60 lb (27 kg) for the role.
Garcelle Beauvais portrays Doris Toomes, Adrian 's wife and Liz 's mother, and Jennifer Connelly provides the voice of Karen, the A.I. in Parker 's suit. Hemky Madera appears as Mr. Delmar, the owner of a local bodega. Bokeem Woodbine and Logan Marshall - Green both play different incarnations of Shocker, Herman Schultz and Jackson Brice respectively; both are accomplices of Toomes who use modified, vibro - blast shooting versions of Crossbones ' gauntlets. Michael Chernus plays Phineas Mason / Tinkerer, and Michael Mando appears as Mac Gargan. Faculty at Parker 's high school include: Kenneth Choi, who previously played Jim Morita in the MCU, as Jim 's descendant Principal Morita; Hannibal Buress as Coach Wilson, the school 's gym teacher, which he described as "one of the dumbass characters that do n't realize (Parker is) Spider - Man ''; Martin Starr, who previously had a non-speaking role in The Incredible Hulk identified as Amadeus Cho by the novelization for that film, as Mr. Harrington, a teacher and academic decathlon coach; Selenis Leyva as Ms. Warren; Tunde Adebimpe as Mr. Cobbwell; and John Penick as Mr. Hapgood. Parker 's classmates include: Isabella Amara as Sally; Jorge Lendeborg Jr. as Jason Ionello; J.J. Totah as Seymour; Abraham Attah as Abraham; Tiffany Espensen as Cindy; Angourie Rice as Betty Brant; Michael Barbieri as Charles; and Ethan Dizon as Tiny. Martha Kelly appears in the film as a tour guide, and Kirk Thatcher makes a cameo appearance as a "punk '', a homage to his role in Star Trek IV: The Voyage Home. Spider - Man co-creator Stan Lee also has a cameo, as a New York City apartment resident named Gary who witnesses Parker 's confrontation with a neighbor. Jona Xiao was cast in an undisclosed role, but did not appear in the final film.
Following the November 2014 hacking of Sony 's computers, emails between Sony Pictures Entertainment Co-Chairman Amy Pascal and president Doug Belgrad were released, stating that Sony wanted Marvel Studios to produce a new trilogy of Spider - Man films while Sony retained "creative control, marketing and distribution ''. Discussions between Sony and Marvel broke down, and Sony planned to proceed with its own slate of Spider - Man films. However, in February 2015, Sony Pictures and Marvel Studios announced that they would release a new Spider - Man film, with Kevin Feige and Pascal producing. The character would first appear in an earlier Marvel Cinematic Universe film, later revealed to be Captain America: Civil War. Marvel Studios would explore opportunities to integrate MCU characters into future Spider - Man films, which Sony Pictures would continue to finance and distribute, and over which Sony would retain final creative control. Both studios have the ability to terminate the agreement at any point, and no money was exchanged in the deal. However, a small adjustment was made to a 2011 deal formed between the two studios (where Marvel gained full control of Spider - Man 's merchandising rights, in exchange for making a one - time payment of $175 million to Sony and paying up to $35 million for each future Spider - Man film, and forgoing their previous 5 % of any Spider - Man film 's revenue), allowing Marvel to reduce its $35 million payment to Sony if the co-produced film grossed more than $750 million. Lone Star Funds also co-financed the film with Sony, via its LSC Film Corporation deal, covering 25 % of the $175 million budget.
Feige stated that Marvel had been working to add Spider - Man to the Marvel Cinematic Universe since at least October 2014, when they announced their full slate of Phase Three films, saying, "Marvel does n't announce anything officially until it 's set in stone. So we went forward with that Plan A in October, with the Plan B being, if (the deal) were to happen with Sony, how it would all shift. We 've been thinking about (the Spider - Man film) as long as we 've been thinking about Phase Three. '' It was said that Avi Arad and Matt Tolmach, producers for the Amazing Spider - Man series, would serve as executive producers, and that neither director Marc Webb nor actor Andrew Garfield would return for the new film. Sony was reportedly looking for an actor younger than Garfield to play Spider - Man, with Logan Lerman and Dylan O'Brien considered front - runners. In March 2015, Drew Goddard was being considered to write and direct the film, while O'Brien said he had not been approached for the role. Goddard, who was previously attached to a Sony film based on the Sinister Six, later said he declined to work on the new film as he thought he "did n't really have an idea '' for it, adding "it 's very hard to say, ' Ok, now write a new movie, ' '' after spending a year working on the Sinister Six film and being in that mindset. The next month, while promoting Avengers: Age of Ultron, Feige said the character of Peter Parker would be around 15 to 16 years old in the film, which would not be an origin story, since "there have been two retellings of that origin in the last (thirteen years, so) we are going to take it for granted that people know that, and the specifics. '' Parker 's Uncle Ben is referenced in the film, but not by name. Later in April, Nat Wolff, Asa Butterfield, Tom Holland, Timothée Chalamet, and Liam James were under consideration by Sony and Marvel to play Spider - Man, with Holland and Butterfield the front - runners.
In May 2015, Jonathan Levine, Ted Melfi, Jason Moore, the writing team of John Francis Daley and Jonathan Goldstein, and Jared Hess were being considered to direct the film. Butterfield, Holland, Judah Lewis, Matthew Lintz, Charlie Plummer, and Charlie Rowe screen tested for the lead role against Robert Downey Jr., who portrays Tony Stark / Iron Man in the MCU, for "chemistry ''. The six were chosen out of a search of over 1,500 actors to test in front of Feige, Pascal, and the Russo brothers, the directors of Captain America: Civil War. By early June 2015, Levine and Melfi had become the favorites to direct the film, with Daley and Goldstein, and Jon Watts also in consideration, while Feige and Pascal narrowed the actors considered to Holland and Rowe, with both screen testing with Downey again. Holland also tested with Chris Evans, who portrays Steve Rogers / Captain America in the MCU, and emerged as the favorite. On June 23, Marvel and Sony officially announced that Holland would star as Spider - Man, and that Watts would direct the film. The Russos "were pretty vocal about who (they) wanted for the part '', pushing to cast an actor close to the age of Peter Parker in order to differentiate from the previous portrayals. They also praised Holland for having a dancing and gymnastics background. Watts was able to read the Civil War script, talk with the Russos, and be on set for the filming of Spider - Man 's scenes in that film. He was able to "see what they were doing with it '' and provide "ideas about this and that '', including what Parker 's bedroom and wardrobe looked like "so that my movie transitions seamlessly with theirs. '' On joining the MCU and directing the film, Watts said, "I was really excited about that, because the other movies have shown what I described as the Penthouse level of the Marvel world, what it 's like to be Thor, Iron Man, you know, a billionaire playboy and all of that stuff. But what 's great about Spider - Man is that he 's a regular kid and so by showing his story you also get to show what the ground level is like in a world where the Avengers exist ''.
Before getting the job of director, Watts created images of Nick Fury as Parker 's mentor in the story in early "mood reels '' saying, "I do n't know what the situation would be, but that would be a person he 'd want to get in trouble with. '' Feige said the films of John Hughes would be a major influence and that Parker 's personal growth and development would be just as important as his role as Spider - Man. He noted that "at that age, in high school, everything feels like life or death. '' He also said that the film hoped to use one of Spider - Man 's rogues that have not been seen in film yet, and that filming would begin in June 2016. In July 2015, it was reported that Marisa Tomei had been offered the role of May Parker, Peter 's aunt. It was also revealed that Daley and Goldstein, after missing out on the director role, had begun negotiations to write the screenplay, and were given three days to present Marvel with their pitch; both confirmed shortly after that they had reached a deal to write the screenplay. The pair had proposed a take on the character that was "diametrically opposed '' to the previous Spider - Man films, creating a laundry list of all the elements seen in those films and actively trying to avoid re-using them in this film. They chose to focus on the high school aspects of the character and "what it would be like to be a real kid who gets superpowers '', rather than the "drama and weight of the tragedy that leads to the origin of Spider - Man ''. They felt this would differentiate him from the other MCU superheroes as well. Daley called the film "an origin story of him finding his place in the Marvel (Cinematic) Universe '', with the writing team wanting the film to "focus on (Parker) coming to terms with his new abilities and not yet being good with them, and carrying with him some real human fears and weaknesses, '' such as a fear of heights when he has to scale the Washington Monument. 
Daley noted, "Even within the context of this movie, I do n't think you would feel that fear of heights or even the vertigo the audience feels in that scene if you establish him as swinging from skyscrapers at the top of the movie. '' The writers also wanted to avoid the skyscrapers of Manhattan because of how often they were used in the other films, and instead wrote the character into locations such as "the suburbs, on a golf course, the Staten Island Ferry, Coney Island, and even Washington D.C. '' One of the first sequences they pitched was "seeing Spider - Man attached to a plane 10,000 feet up in the air, where he had absolutely no safety net. If you have a character that you 're so familiar with, and you 're familiar with the sort of areas he 's been in, why not turn it on its head and make it something different that people have n't seen before? '' The pair conceded that the film took a more grounded, "low - stakes '' approach than previous films, due in part to Spider - Man existing in a world with the Avengers, since "if the threat became world - threatening, you would obviously bring in the big guys to handle it. ''
Marvel encouraged Daley and Goldstein to express their own sense of humor in the script, with Daley saying, "When you 're seeing the world through the eyes of a fun, funny kid, you can really embrace that voice, and not give him the cookie - cutter one - liners that you 're so accustomed to hearing from Peter Parker. '' Inspired by their experiences working on sit - coms, the writers also looked to create "a network of strong characters '' to surround Parker with in the film. In October 2015, Watts said he was looking to make the film a coming - of - age story to see the growth of Parker, citing Say Anything..., Almost Famous, and Ca n't Buy Me Love as some of his favorite films in that genre. It was this aspect of the film that had initially got Watts interested in directing it, as he had already been looking to make a coming - of - age story when he heard that the new Spider - Man would be younger than previous incarnations. Watts re-read the original Spider - Man comics in preparation for the film, and "came to a new realization about why he was so popular originally: He gave a different perspective on this world that they were building. He was introduced in the ' 60s, when they had already built a crazy spectacular Marvel Universe... to give a regular person 's perspective on it. And that ties in really nicely with what I get to do with this movie, which is '' introduce him to the MCU. Specific comics that Watts noted as potential influences were Ultimate Spider - Man and Spider - Man Loves Mary Jane. In December, Oliver Scholl signed on to be the production designer for the film.
Watts wanted to heavily pre-visualize the film, especially its action sequences, as he does on all his films. For Homecoming, Watts worked with a team to "figure out the visual language for the action sequences and just... you get to try stuff out before you 're actually on - set shooting it '' which helped Watts practice given his lack of experience working on large - scale action films. For the "web - slinging '' sequences in the film, Watts wanted to avoid the big "swoopy '' camera moves that had been previously used for such Spider - Man scenes and instead "keep it all as grounded as possible. So, whether it was shooting with a drone camera or a helicopter or a cable - cam, or even just handheld, up on a roof chasing after him, I wanted it to feel like we were there with him, and everything was something you could actually film. ''
In January 2016, Sony shifted the film 's release date from July 28 to July 7, 2017, and said the film would be digitally remastered for IMAX 3D in post-production. J.K. Simmons expressed interest in reprising his role as J. Jonah Jameson from Sam Raimi 's Spider - Man films. In early March, Zendaya was cast in the film as Michelle, and Tomei was confirmed as May Parker. The following month, Feige confirmed that characters from previous MCU films would appear, and clarified that the deal formed with Sony does not specify which characters can and can not cross over. He noted that the sharing between the studios was done with "good faith '' in order "to have more toys to play with as we put together a story '', and that "the agreement was that it is very much a Sony Pictures movie... we are the creative producers. We are the ones hiring the actor, introducing him in (Civil War), and then working right now on the script and soon to be shooting ''. Sony Pictures chairman Thomas Rothman further added that Sony has final greenlight authority, but was deferring creatively to Marvel. At CinemaCon 2016, Sony announced the title of the film to be Spider - Man: Homecoming, a reference to the common high school tradition of homecoming as well as to the character "coming home '' to Marvel and the MCU. Tony Revolori and Laura Harrier joined the cast as classmates of Parker 's, and Downey Jr. was revealed to be in the film as Stark. Watts noted that Stark "was always a part of '' the film 's story because of his interactions with Parker in Civil War.
Also in April, Michael Keaton entered talks to play a villain, but dropped out of discussions shortly thereafter due to scheduling conflicts with The Founder. He soon reentered talks for the role after a change in schedule for that film, and closed the deal in late May. In June, Michael Barbieri was cast as a friend of Parker 's, Kenneth Choi was cast as Parker 's high school principal, and Logan Marshall - Green was cast as another villain alongside Keaton 's character, while Donald Glover and Martin Starr joined the cast in undisclosed roles. Watts said that he wanted the cast to reflect Queens as "one of (the) most diverse places in the world '', with Feige adding that "we want everyone to recognize themselves in every portion of our universe. (With this cast) especially, it really feels like this is absolutely what has to happen and continue. '' This is also different from the previous films, which Feige described as being "set in a lily - white Queens ''. Additionally, Marvel made a conscious decision to mostly avoid including or referencing characters who appeared in previous Spider - Man films, outside of major ones like Peter and May Parker, and Flash Thompson. This included The Daily Bugle, with co-producer Eric Hauserman Carroll saying, "We toyed with it for a while, but again, we did n't want to go down that road right away, and if we do do a Daily Bugle, we want to do it in a way that feels contemporary. '' This also included the character Mary Jane Watson, but Zendaya 's Michelle was eventually given the initials "MJ '' as a nod to that character. Feige said that the point of this is "to have fun with (references) while at the same time having it be different characters that can provide a different dynamic. ''
Spider - Man 's costume in the film has more technical improvements than previous suits, including the logo on the chest being a remote drone, an AI system similar to Stark 's J.A.R.V.I.S., a holographic interface, a parachute, a tracking device for Stark to track Parker, a heater, an airbag, the ability to light up, and the ability to augment reality with the eye pieces. Stark also builds in a "training wheels '' protocol, to initially limit Parker 's access to all of its features. Carroll noted Marvel went through the comics and "pull (ed) out all the sort of fun and wacky things the suit did '' to include in the Homecoming suit. Spider - Man 's web shooters have various settings, first teased at the end of Civil War, which Carroll explained, "he can adjust the spray, and he can even scroll through different web settings, like spinning web, web ball, ricochet web... you know, all of the stuff we can see him do in the comics... It 's kind of like a DSLR camera. He can shoot without it, or he can hold that thing a second, get his aiming right, and really choose a web to shoot. ''
Principal photography began on June 20, 2016, at Pinewood Atlanta Studios in Fayette County, Georgia, under the working title Summer of George. Salvatore Totino served as director of photography. Filming also took place in Atlanta, with locations including Grady High School, Downtown Atlanta, the Atlanta Marriott Marquis, Piedmont Park, the Georgia World Congress Center, and the West End neighborhood. Holland said building New York sets in Atlanta was cheaper than actually filming in New York, a location closely associated with the character, though the production may "end up (in New York) for one week or two. '' A replica of the Staten Island Ferry was built in Atlanta, with the ability to open and close in half in 10 to 12 seconds and be flooded with 40,000 gallons of water in 8 seconds. Additional filming also occurred at two magnet schools in the Van Nuys and Reseda neighborhoods of Los Angeles.
Casting continued after the start of production, with the inclusion of Isabella Amara, Jorge Lendeborg Jr., J.J. Totah, Hannibal Buress, Selenis Leyva, Abraham Attah, Michael Mando, Tyne Daly, Garcelle Beauvais, Tiffany Espensen, and Angourie Rice in unspecified roles, with Bokeem Woodbine joining as an additional villain. At San Diego Comic - Con International 2016, Marvel confirmed the castings of Keaton, Zendaya, Glover, Harrier, Revolori, Daly and Woodbine, while revealing Zendaya, Harrier, and Revolori 's roles as Michelle, Liz Allan and Thompson, respectively, and announcing the casting of Jacob Batalon as Ned. It was also revealed that the Vulture would be the film 's villain, while the writing teams of Watts and Christopher Ford, and Chris McKenna and Erik Sommers, joined Goldstein and Daley in writing the screenplay, from Goldstein and Daley 's story. Watts praised Goldstein and Daley 's drafts as "really fun and funny '', and said that they "sort of established the broad strokes of the movie '', with he and Ford, close friends since childhood, then re-writing the script based on specific ideas that Watts had and things that he wanted to film, which he said was a "pretty substantial structural pass, rearranging things and building it into the sort of story arc we wanted it to be. '' McKenna and Sommers then joined the film to deal with changes to the script during filming, as "it 's all a little bit flexible when you get to set. You try things out, and you just need someone to be writing while you 're shooting. ''
Harrier noted that the young actors in the film "constantly refer to ourselves as The Breakfast Club. '' Shortly after, Martha Kelly joined the cast in an unspecified role. In August, Michael Chernus was cast as Phineas Mason / Tinkerer, while Jona Xiao joined the cast in an unspecified role, and Buress said he was playing a gym teacher. By September 2016, Jon Favreau was reprising his role as Happy Hogan from the Iron Man series, and filming concluded in Atlanta and moved to New York City. Locations in the latter included Astoria, Queens, St. George, Staten Island, Manhattan, and Franklin K. Lane High School in Brooklyn. Additionally, UFC fighter Tyron Woodley said he had been considered for a villain role in the film, but had to drop out due to a prior commitment with Fox Sports. Principal photography wrapped on October 2, 2016, in New York City, with some additional filming taking place later in the month in Berlin, Germany, near the Brandenburg Gate.
In November 2016, Feige confirmed that Keaton would play the Vulture, the Adrian Toomes incarnation of the character, while Woodbine was revealed as Herman Schultz / Shocker. In March 2017, Harrier said the film was undergoing reshoots, and Evans was set to appear as Steve Rogers / Captain America in an instructional fitness video. Watts was inspired by The President 's Fitness Challenge for this, feeling that Captain America would be the obvious version of that for the MCU. He then started brainstorming other public service announcements (PSA) starring Captain America, "about just everything, (like) brushing your teeth. Just anything you could think of, we had poor Captain America do it. '' Watts said that many of the additional PSA videos would be featured on the home media of the film. Watts confirmed that the company Stark creates that leads Toomes on his villainous path in the film is Damage Control, which Watts felt "just fit in with our overall philosophy with the kind of story we wanted to tell '' and created a lot of practical questions Watts wanted to use "to drive the story ''.
The film features multiple post-credits scenes. The first gives the Vulture a chance at redemption, showing him protect Parker from another villain. Watts said this "was a really interesting thing in the development of the story. You could n't just rely on the tropes of the villain being a murderer and killing a bunch of people. He had to be redeemable in some capacity in the end and that he believes everything he said, especially about his family. '' The second post-credits scene is an additional Captain America PSA, where he talks about the value of patience -- a joke at the expense of the audience, who have just waited through the film 's credits to see the scene. This was a "last - minute addition '' to the film. Watts completed work on Homecoming at the beginning of June 2017, approving the final visual effects shots. He stated that he had never been told that he could not do something by Marvel or Sony, saying, "You assume you 'll have to fight for every little weird thing you wan na do, but I did n't really ever run into that. I got to do kind of everything I wanted to. '' That month, Starr explained that he was playing the academic decathlon coach at Parker 's high school, and Marshall - Green was said to be portraying another Shocker.
In July, Feige discussed specific moments in the film, including an homage to The Amazing Spider - Man issue 33 where Parker is trapped underneath rubble, something Feige "wanted to see in a movie for a long, long time ''. Daley said that they added the scene to the script because of how much Feige wanted it, and explained, "We have (Parker) starting the scene with such self - doubt and helplessness, in a way that you really see the kid. You feel for him. He 's screaming for help, because he does n't think he can do it, and then... he kind of realizes that that 's been his biggest problem. '' Feige compared the film 's final scene, where Parker accidentally reveals that he is Spider - Man to his Aunt May, to the ending of Iron Man when Stark reveals that he is Iron Man to the world, saying, "what does that mean for the next movie? I do n't know, but it will force us to do something unique. '' Goldstein added that it "diminishes what is often the most trivial part of superhero worlds, which is finding your secret. It takes the emphasis off that (and) lets her become part of what 's really his life ''. Feige also talked about the film 's revelation that the Vulture is the father of Parker 's love interest, saying, "If that did n't work, the movie did n't work. We worked backwards and forwards from that moment. It was like two movies -- it was the movie up until then and the movie after that moment. Because it had to surprise you, but it had to be true, also. You had to believe that we had set it up so that you would buy it (and it) does n't seem like something out of left field. That 's a pretty great moment and we did n't know until we showed it to audiences '' that it would work. Watts said the revelation scene and the following interactions between the Vulture and Parker were, "more than anything else, (what) I was looking forward to, and I got to have a lot of fun shooting that stuff. '' Goldstein said the scene after the reveal, where Vulture realizes that Parker is Spider - Man while driving him to the school dance, was the moment he was most proud of in the film, and Daley said that scene 's effect on audiences was the dramatic equivalent of an audience laughing at a joke they had written. He added that the writers were "giddy when we first came up with (that twist), because it 's taking the obvious tension of meeting the father of the girl that you have a crush on, and multiplying it by 1,000, when you also realize he 's the guy you 've been trying to stop the whole time. ''
Visual effects for the film were completed by Sony Pictures Imageworks, Method Studios, Luma Pictures, Digital Domain, Cantina Creative, Iloura, Trixter, and Industrial Light & Magic. Executive producer Victoria Alonso initially did not want Imageworks, who worked on all previous Spider - Man films, to work on Homecoming in order to have a different look. She eventually changed her mind after seeing "phenomenal '' test material from the vendor. Digital Domain worked on the Staten Island Ferry battle, creating the CGI versions of Spider - Man and the first Vulture suit, Iron Man, and Spider - Man 's emblem drone, Dronie. Digital Domain was able to LIDAR an actual Staten Island Ferry, as well as the version created on set, to help with creating their digital version. They also created Iron Man for when he confronts Parker after the battle. Lou Pecora, visual effects supervisor at Digital Domain, called that sequence "brutal '' because "the way they were shot, it was lit to be a certain time of day, and afterwards it was decided to change that time of day. '' Sony Pictures Imageworks created much of the third act of the film, when Parker confronts Toomes on the plane and beach in his homemade suit and Toomes is in his upgraded Vulture suit. Some elements from Vulture 's first suit were shared with Imageworks, but the remainder was created by them based on a maquette. For the plane 's cloaking ability, Imageworks based what they created on the real - world Adaptiv IR Camouflage tank cloaking system from BAE Systems, which uses a series of tiles to cloak against infrared. For their web design, which was based on the one created for Civil War, Digital Domain referenced polar bear hair because of its translucent nature.
Imageworks also looked to the Civil War webs and referenced the webs they had created for previous Spider - Man films, in which the webs had tiny barbs that aided in hooking on to things, dialing back the barbs and drawing on the other web designs created for the film. Method Studios worked on the Washington Monument sequence.
Trixter contributed over 300 shots for the film, working on the opening sequence that retold the events of Civil War from Parker 's perspective, the scene in the opening at Grand Central Terminal, a sequence in which Toomes brings Liz and Parker to the dance, the school battle between Parker and Schultz, and the scene around and within the Avengers compound. They also worked on both Spider - Man suits and the spider tracer. Trixter created additional salvage workers to populate the Grand Central scene, whose clothes and proportions could be altered to create variation. For the battle between Parker and Schultz, Trixter used an all - digital Spider - Man in his homemade suit, which came from Imageworks, with Trixter applying a rigging, muscle and cloth system to it "to mimic the appearance of the rather loose training suit ''. They also created the effects for Schultz 's gauntlets and had to change the setting from the Atlanta set to Queens, by using a CGI school and adding 360 degrees of matte paintings for the mid - to far - distance elements. Trixter received concept art and basic geometry that had been used previously for the Avengers compound, but ended up remodeling it for the way it appears in Homecoming and created the environment around it. Models and textures for Spider - Man 's Avengers costume were created by Framestore for use in a future MCU film, which Trixter took to add to the vault they created to house the suit. Trixter VFX supervisor Dominik Zimmerle noted "The idea was to have a clean, high tech, presentation Vault for the new suit. It should appear distinctively ' Stark ' originated. ''
While promoting Doctor Strange in early November 2016, Feige accidentally revealed that Michael Giacchino, who composed the music for that film, would compose the score for Homecoming as well. Giacchino soon confirmed this himself. Recording for the soundtrack began on April 11, 2017. The score includes the theme from the 1960s animated series. The soundtrack was released by Sony Masterworks on July 7, 2017.
Spider - Man: Homecoming held its world premiere at the TCL Chinese Theater in Hollywood on June 28, 2017, and was released in the United Kingdom on July 5. It opened in additional international markets on July 6, with 23,400 screens (277 of which were IMAX) in 56 markets for its opening weekend. The film was released in the United States on July 7, in 4,348 theaters (392 were IMAX and IMAX 3D, and 601 were premium large - format), including 3D screenings. It was originally slated for release on July 28.
Watts, Holland, Batalon, Harrier, Revolori, and Zendaya appeared at the 2016 San Diego Comic - Con to show an exclusive clip of the film, which also had a panel at Comic Con Experience 2016. The first trailer for Homecoming premiered on Jimmy Kimmel Live! on December 8, 2016, and was released online alongside an international version which featured some different shots and dialogue. Feige thought there was enough of a difference between the two that "it would be fun for people to see both. '' The shots of Vulture descending through a hotel atrium and Spider - Man swinging with Iron Man flying beside him were created specifically for the trailer, not the film. Watts explained that the Vulture shot was created for Comic - Con before much of the film had been shot, and "was never meant to be in the movie '', but he was able to repurpose the angle for Vulture 's reveal in the film. The Spider - Man and Iron Man shot was created because the marketing team wanted a shot of the two together and existing shots for the film "just did n't look that great '' then. The shot used in the trailer was made with a background plate taken when filming the subway in Queens. The two trailers were viewed over 266 million times globally within a week.
On March 28, 2017, a second trailer debuted after screening at CinemaCon 2017 the night before. Shawn Robbins, chief analyst at BoxOffice.com, noted that the new Justice League trailer had received more Twitter mentions in that week but there was "clearer enthusiasm for Spider - Man. '' The Homecoming trailer was second for the week of March 20 -- 26 in new conversations (85,859) behind Justice League (201,267), according to comScore 's PreAct service, which is "a tracking service utilizing social data to create context of the ever - evolving role of digital communication on feature films ''. An exclusive clip from the film was seen during the 2017 MTV Movie & TV Awards. On May 24, 2017, Sony and Marvel released a third domestic and international trailer. Ethan Anderton of / Film enjoyed both trailers, stating Homecoming "has the potential to be the best Spider - Man movie yet. Having the webslinger as part of the Marvel Cinematic Universe just feels right ''. TechCrunch 's Darrell Etherington agreed, saying, "You may have feelings about a tech - heavy Spider - Man suit or other aspects of this interpretation of the character, but it 's still shaping up to be better than any Spider - Man depicted in movies in recent memory. '' Ana Dumaraog for ScreenRant said the second trailer "arguably showed too much of the movie 's overarching narrative '', but the third "perfectly shows the right amount of new and old footage ''. She also appreciated the attention to detail that Watts and the writers put into the film, as highlighted by the trailers. Siddhant Adlakha of Birth. Movies. Death also felt the trailers were giving away too many details, but enjoyed them overall, especially the "vlogging '' aspect. Collider 's Dave Trombore expressed similar sentiments to Adlakha. After the trailers ' release, comScore and its PreAct service noted Homecoming was the top film for new social media conversations, that week and the week of May 29.
Alongside the release of the third trailers were domestic and international release posters. The domestic poster was criticized for its "floating head '' style, which offers "a chaotic mess of people looking in different directions, with little sense of what the film will deliver. '' Dan Auty for GameSpot called it a "star studded hot mess '', while Vanity Fair 's Katey Rich felt the poster was "too bogged down by the many different threads of the Marvel universe to highlight anything that 's made Spider - Man: Homecoming seem special so far. '' Adlakha felt the posters released for the film "have been alright thus far, but these ones probably tell general audiences to expect a very bloated movie. '' Adlakha was more positive about the international poster, which he felt was more "comicbook - y '' and "looks like it could be an actual scene from the film. '' Both Rich and Adlakha criticized the fact that Holland, Keaton and Downey appeared twice on the domestic poster, both in and out of costume. Sony partnered with ESPN CreativeWorks to create cross-promotional television ads for Homecoming and the 2017 NBA Finals, which were filmed by Watts. The ads were made to "weave in a highlight from the game just moments '' after it occurred. The promos see Holland, Downey Jr., and Favreau reprise their roles from the film, with cameo appearances from Stan Lee, DJ Khaled, Tim Duncan, Magic Johnson, and Cari Champion. Through June and July 2017, a Homecoming - inspired cafe opened in the Roppongi Hills complex in Tokyo, offering "arachnid - themed foods and drinks, including a Spider Curry, Spider Sense Latte and a sweet and refreshing Strawberry Spider Squash drink '', as well as a free, limited - edition sticker with any purchase.
For the week ending on June 11, comScore and its PreAct service noted that new social media conversations for the film were second only to Black Panther and its new trailer; Homecoming was then the number one film in the next two weeks. That month, Sony released a mobile app allowing users to "access '' Parker 's phone and "view his photos, videos, text messages, and hear voicemails from his friends ''. The app also provided an "AR Suit Explorer '' to learn more about the technology in the Spider - Man suit, and use photo filters, GIFs and stickers of the character. Sony and Dave & Buster 's also announced an arcade game based on the film, playable exclusively at Dave & Buster 's locations. A tie - in comic, Spider - Man: Homecoming Prelude, was released on June 20, collecting two prelude issues. On June 28, in partnership with Thinkmodo, a promotional prank was released in which Spider - Man (stuntman Chris Silcox) dropped from the ceiling in a coffee shop to scare customers; the video also featured a cameo appearance from Lee. Sony also partnered with the mobile app Holo to let users add 3D holograms of Spider - Man, with Holland 's voice and lines from the film, to real - world photos and videos. Before the end of June, Spider - Man: Homecoming -- Virtual Reality Experience was released on the PlayStation VR, Oculus Rift and HTC Vive for free, produced by Sony Pictures VR and developed by CreateVR. The virtual reality experience allows users to experience how it feels to be Spider - Man, with the ability to hit targets with his web shooters and face off against the Vulture. It was also available at select Cinemark Theatres in the United States and at the CineEurope trade show in Barcelona.
Ahead of the film 's release, for the week ending on July 2, the film was the top film for the third consecutive week for new social media conversations, according to comScore, which also noted that Spider - Man: Homecoming had produced a total of 2.67 million conversations to date. The film 's marketing campaign also included promotions with Audi and Dell (both also had product placement within the film), Pizza Hut, General Mills, Synchrony Bank, Movietickets.com, Goodwill, Baskin Robbins, Dunkin ' Donuts, Danone Waters, Panasonic Batteries, M&M 's, Mondelez, Asus, Bimbo, Jetstar, KEF, Kellogg 's, Lieferheld, Pepsico, Plus, Roady, Snickers, Sony Mobile, Oppo, Optus, and Doritos. As with the ESPN NBA Finals campaign, Watts directed a commercial for Dell 's marketing efforts, which earned 2.8 million views online. Goodwill hosted a build - your - own Spider - Man suit contest, with the winner attending the film 's premiere and receiving a co-branded Goodwill campaign focused on being a community hero. Overall, the campaign generated over $140 million in media value, greater than those for all previous Spider - Man films and Marvel Studios ' first 2017 release, Guardians of the Galaxy Vol. 2. This does not include merchandising for the film, which is controlled by Marvel and Disney and where they benefit the most from the deal to make the film for Sony. Marketing of the film in China for its release in the country included partnering with Momo, iQiyi, Tencent QQ, Baidu, Audi, Doritos, M&Ms, Dell, Mizone, CapitaLand, Xiaomi, HTC Vive and corporate parent Sony. To help target the teenage audience, Holland "recorded a high school entrance exam greeting '' while The Rap of China contestant PG One recorded a theme song.
Spider - Man: Homecoming was released on digital download by Sony Pictures Home Entertainment on September 26, 2017, and on Blu - ray, Blu - ray 3D, Ultra HD Blu - ray and DVD on October 17, 2017. The digital and Blu - ray releases include behind - the - scenes featurettes, deleted scenes, and a blooper reel.
The physical releases in its first week of sale were the top home media release, according to NPD VideoScan data. The Blu - ray version accounted for 79 % of the sales, with 13 % of total sales coming from the Ultra HD Blu - ray version.
As of October 29, 2017, Spider - Man: Homecoming has grossed $333.9 million in the United States and Canada, and $546 million in other territories, for a worldwide total of $879.9 million. The film had the second biggest global IMAX opening for a Sony film with $18 million. In May 2017, a survey from Fandango indicated that Homecoming was the second-most anticipated summer blockbuster behind Wonder Woman. By September 24, 2017, the film had earned $874.4 million worldwide, becoming the highest grossing superhero film of 2017, and the sixth largest film based on a Marvel character.
Spider - Man: Homecoming earned $50.9 million on its opening day in the United States and Canada (including $15.4 million from Thursday night previews), and had a total weekend gross of $117 million, the top film for the weekend. It was the second - highest opening for both a Spider - Man film and a Sony film, after Spider - Man 3 's $151.1 million debut in 2007. Early projections for the film from BoxOffice had it earning $135 million in its opening weekend, which was later adjusted to $125 million, with Deadline.com noting industry projections of anywhere between $90 -- 120 million. In its second weekend, the film fell to second behind War for the Planet of the Apes with $45.2 million, a 61 % decline in earnings, which was similar to the declines The Amazing Spider - Man 2 and Spider - Man 3 had in their second weekends. Additionally, Homecoming 's domestic gross reached $208.3 million, which surpassed the total domestic gross of The Amazing Spider - Man 2 ($202.9 million). The film fell to third in its third weekend. By July 26, Homecoming 's domestic gross reached $262.1 million, surpassing the total domestic gross of The Amazing Spider - Man ($262 million), leading to a fifth place finish for its fourth weekend. The next weekend, Homecoming finished sixth, and finished seventh the following five weekends. By September 3, 2017, the film had earned $325.1 million, surpassing the $325 million projected amount for its total domestic gross. In its eleventh weekend, Homecoming finished ninth.
Outside of the United States and Canada, Spider - Man: Homecoming earned $140.5 million its opening weekend from the 56 markets it opened in, with the film becoming number one in 50 of them. The $140.5 million was the highest opening ever for a Spider - Man film, factoring in the same number of markets and 2017 's exchange rates. South Korea had the highest Wednesday opening day gross, which contributed to a $25.4 million five - day opening in the country, the third highest opening ever for a Hollywood film. Brazil had the largest July opening day of all time, with $2 million, leading to an opening weekend total of $8.9 million. The $7 million earned from IMAX showings was the top opening of all time for a Sony film internationally. In its second weekend, the film opened in France at number one and at number two in Germany. It earned an additional $11.9 million in South Korea, to bring its total in the country to $42.2 million. This made Homecoming the highest grossing Spider - Man film and the top grossing Hollywood film of 2017 in the country. Brazil contributed an additional $5.7 million, for a total of $19.4 million from the country, which was also the largest gross from a Spider - Man film. In its third weekend, the film became the highest - grossing Spider - Man film of all time in the Latin America region, with a region total of $77.4 million. Brazil remained the top grossing market for the region, with $25.7 million. In South Korea, the film became the 10th highest - grossing international release of all time. Homecoming opened at number one in Spain in its fourth weekend. In its sixth weekend, the film opened at number one in Japan, with its $770,000 from IMAX the fourth largest IMAX weekend for a Marvel film in the country.
The film opened at number one in China on September 8, 2017, grossing $23 million on its opening day, including Thursday previews, making it the third biggest opening day for a Marvel Cinematic Universe film, behind Avengers: Age of Ultron and Captain America: Civil War, and the largest opening day gross for a Sony film in the country. The $70.8 million Homecoming earned in China for its opening weekend was the third highest opening behind Age of Ultron and Civil War, with $6 million from IMAX, which was the best IMAX opening weekend in September, and the best IMAX opening weekend for a Sony film. As of September 24, 2017, the film 's largest markets were China ($115.7 million), South Korea ($51.4 million), and the United Kingdom ($34.8 million).
The review aggregator website Rotten Tomatoes reported a 92 % rating based on 300 reviews, with an average rating of 7.6 / 10. The website 's critical consensus reads, "Spider - Man: Homecoming does whatever a second reboot can, delivering a colorful, fun adventure that fits snugly in the sprawling MCU without getting bogged down in franchise - building. '' Metacritic, which uses a weighted average, assigned a score of 73 out of 100, based on 51 critics, indicating "generally favorable reviews ''. Audiences polled by CinemaScore gave the film an average grade of "A '' on an A+ to F scale.
Owen Gleiberman of Variety said, "(T) he flying action has a casual flip buoyancy, and the movie does get you rooting for Peter. The appeal of this particular Spider - Boy is all too basic: In his lunge for valor, he keeps falling, and he keeps getting up. '' Mike Ryan of Uproxx praised the film 's light tone and performances, writing: "Spider - Man: Homecoming is the best Spider - Man movie to date. That does come with a caveat that Spider - Man: Homecoming and Spider - Man 2 are going for different things and both are great. But, tonally, I just love this incarnation of a Peter Parker who just loves being Spider - Man. '' The New York Times 's Manohla Dargis stated, "Mr. Holland looks and sounds more like a teen than the actors who 've previously suited up for this series, and he has fine support from a cast that includes Jacob Batalon as Peter 's best friend. Other good company includes Donald Glover, as a wrong - time, wrong - place criminal, and Martin Starr, who plays his teacher role with perfect deadpan timing. '' Richard Roeper of the Chicago Sun - Times wrote, "The best thing about Spider - Man: Homecoming is Spidey is still more of a kid than a man. Even with his budding superpowers, he still has the impatience, the awkwardness, the passion, the uncertainty and sometimes the dangerous ambition of a teenager still trying to figure out this world. '' Kenneth Turan of the Los Angeles Times gave the film a "mixed '' review, praising the stunt work and calling Michael Keaton 's performance as the Vulture "one of the strongest, most sympathetic villains of the entire series, '' but criticizing the direction by Jon Watts as "unevenly orchestrated ''.
Conversely, The Hollywood Reporter 's John DeFore found the film to be "occasionally exciting but often frustrating, '' and suggested it might have worked better "had Marvel Studios execs and a half - dozen screenwriters not worked so hard to integrate Peter Parker into their money - minting world. '' Robbie Collin of The Daily Telegraph said, "A little of the new Spider - Man went an exhilaratingly long way in Captain America: Civil War last year. But a lot of him goes almost nowhere in this slack and spiritless solo escapade. '' Mick LaSalle of the San Francisco Chronicle stated, "The movie breaks no new ground, and action sequences that were intended to be thrilling -- such as an epic battle on the Staten Island Ferry -- just sit there on the screen, incapable of stirring a single pulse, but content in their competence. ''
In June 2016, Rothman stated that Sony and Marvel were committed to making future Spider - Man films. By October 2016, discussions had begun for a second film, according to Holland, figuring out "who the villain is going to be and where we 're going '' in a potential sequel. In December 2016, after the successful release of the first Homecoming trailer, Sony slated a sequel to the film for July 5, 2019. Feige had stated that if additional films were made, an early idea Marvel had for them was to follow the model of the Harry Potter film series, having the plot of each film cover a new school year; the first sequel is intended to follow Parker 's junior year of high school, with a potential third film being set during his senior year. In June 2017, Feige and Pascal were both keen on having Watts return to direct the sequel, which is expected to start filming in April or May 2018. By the next month, Holland was confirmed to return, with Watts entering negotiations to return as director. Tomei has indicated a willingness to play Aunt May in future sequels. By the end of August, Chris McKenna and Erik Sommers were in final negotiations to write the screenplay.
Canning - wikipedia
Canning is a method of preserving food in which the food contents are processed and sealed in an airtight container. Canning provides a shelf life typically ranging from one to five years, although under specific circumstances it can be much longer. A freeze - dried canned product, such as canned dried lentils, could last as long as 30 years in an edible state. In 1974, samples of canned food from the wreck of the Bertrand, a steamboat that sank in the Missouri River in 1865, were tested by the National Food Processors Association. Although appearance, smell and vitamin content had deteriorated, there was no trace of microbial growth and the 109 - year - old food was determined to be still safe to eat.
During the first years of the Napoleonic Wars, the French government offered a hefty cash award of 12,000 francs to any inventor who could devise a cheap and effective method of preserving large amounts of food. The larger armies of the period required increased and regular supplies of quality food. Limited food availability was among the factors limiting military campaigns to the summer and autumn months. In 1809, Nicolas Appert, a French confectioner and brewer, observed that food cooked inside a jar did not spoil unless the seals leaked, and developed a method of sealing food in glass jars. Appert was awarded the prize in 1810 by Count Montalivet, the French Minister of the Interior. The reason for the lack of spoilage was unknown at the time, since it would be another 50 years before Louis Pasteur demonstrated the role of microbes in food spoilage.
The French Army began experimenting with issuing canned foods to its soldiers, but the slow process of canning foods and the even slower development and transport stages prevented the army from shipping large amounts across the French Empire, and the war ended before the process was perfected. Unfortunately for Appert, the factory which he had built with his prize money was razed in 1814 by Allied soldiers when they entered France.
Following the end of the Napoleonic Wars, the canning process was gradually employed in other European countries and in the US.
Based on Appert 's methods of food preservation, the tin can process was allegedly developed by Frenchman Philippe de Girard, who came to London and used British merchant Peter Durand as an agent to patent his own idea in 1810. Durand did not pursue food canning himself, selling his patent in 1811 to Bryan Donkin and John Hall, who were in business as Donkin Hall and Gamble, of Bermondsey. Bryan Donkin developed the process of packaging food in sealed airtight cans, made of tinned wrought iron. Initially, the canning process was slow and labour - intensive, as each large can had to be hand - made, and took up to six hours to cook, making canned food too expensive for ordinary people.
The main market for the food at this stage was the British Army and Royal Navy. By 1817 Donkin recorded that he had sold £ 3000 worth of canned meat in six months. In 1824 Sir William Edward Parry took canned beef and pea soup with him on his voyage to the Arctic in HMS Fury, during his search for a northwestern passage to India. In 1829, Admiral Sir James Ross also took canned food to the Arctic, as did Sir John Franklin in 1845. Some of his stores were found by the search expedition led by Captain (later Admiral Sir) Leopold McClintock in 1857. One of these cans was opened in 1939, and was edible and nutritious, though it was not analysed for contamination by the lead solder used in its manufacture.
During the mid-19th century, canned food became a status symbol amongst middle - class households in Europe, being something of a frivolous novelty. Early methods of manufacture employed poisonous lead solder for sealing the cans, which may have worsened the disastrous outcome of the 1845 Franklin expedition to chart and navigate the Northwest Passage.
Increasing mechanization of the canning process, coupled with a huge increase in urban populations across Europe, resulted in a rising demand for canned food. A number of inventions and improvements followed, and by the 1860s smaller machine - made steel cans were possible, and the time to cook food in sealed cans had been reduced from around six hours to thirty minutes.
Canned food also began to spread beyond Europe -- Robert Ayars established the first American canning factory in New York City in 1812, using improved tin - plated wrought - iron cans for preserving oysters, meats, fruits and vegetables. Demand for canned food greatly increased during wars. Large - scale wars in the nineteenth century, such as the Crimean War, American Civil War, and Franco - Prussian War introduced increasing numbers of working - class men to canned food, and allowed canning companies to expand their businesses to meet military demands for non-perishable food, allowing companies to manufacture in bulk and sell to wider civilian markets after wars ended. Urban populations in Victorian Britain demanded ever - increasing quantities of cheap, varied, quality food that they could keep at home without having to go shopping daily. In response, companies such as Underwood, Nestlé, Heinz, and others provided quality canned food for sale to working class city - dwellers. In particular, Crosse and Blackwell took over the concern of Donkin Hall and Gamble. The late 19th century saw the range of canned food available to urban populations greatly increase, as canners competed with each other using novel foodstuffs, highly decorated printed labels, and lower prices.
Demand for canned food skyrocketed during World War I, as military commanders sought vast quantities of cheap, high - calorie food to feed their millions of soldiers, which could be transported safely, survive trench conditions, and not spoil in transport. Throughout the war, soldiers generally subsisted on low - quality canned foodstuffs, such as the British "Bully Beef '' (cheap corned beef), pork and beans, canned sausages, and Maconochies Irish Stew, but by 1916, widespread boredom with cheap canned food amongst soldiers resulted in militaries purchasing better - quality food to improve morale, and complete meals in a can began to appear. In 1917, the French Army began issuing canned French cuisine, such as coq au vin, Beef Bourguignon and Vichyssoise, while the Italian Army experimented with canned ravioli, spaghetti bolognese, Minestrone and Pasta e fagioli. Shortages of canned food in the British Army in 1917 led to the government issuing cigarettes and amphetamines to soldiers to suppress their appetites. After the war, companies that had supplied military canned food improved the quality of their goods for civilian sale.
The original fragile and heavy glass containers presented challenges for transportation, and glass jars were largely replaced in commercial canneries with cylindrical tin or wrought - iron canisters (later shortened to "cans '') following the work of Peter Durand (1810). Cans are cheaper and quicker to make, and much less fragile than glass jars. Glass jars have remained popular for some high - value products and in home canning. Can openers were not invented for another thirty years -- at first, soldiers had to cut the cans open with bayonets or smash them open with rocks. Today, tin - coated steel is the material most commonly used. Laminate vacuum pouches are also used for canning, such as used in MREs and Capri Sun drinks.
To prevent the food from being spoiled before and during containment, a number of methods are used: pasteurisation, boiling (and other applications of high temperature over a period of time), refrigeration, freezing, drying, vacuum treatment, antimicrobial agents that are natural to the recipe of the foods being preserved, a sufficient dose of ionizing radiation, submersion in a strong saline solution, acid, base, osmotically extreme (for example very sugary) or other microbially - challenging environments.
Other than sterilization, no method is perfectly dependable as a preservative. For example, the microorganism Clostridium botulinum (which causes botulism) can only be eliminated at temperatures above the boiling point of water.
From a public safety point of view, foods with low acidity (a pH more than 4.6) need sterilization under high temperature (116 -- 130 ° C). To achieve temperatures above the boiling point requires the use of a pressure canner. Foods that must be pressure canned include most vegetables, meat, seafood, poultry, and dairy products. The only foods that may be safely canned in an ordinary boiling water bath are highly acidic ones with a pH below 4.6, such as fruits, pickled vegetables, or other foods to which acidic additives have been added.
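The acidity threshold described above amounts to a simple decision rule, which can be sketched as follows (a minimal illustration; the function name and example pH values are my own, and real process selection also depends on factors such as jar size and altitude):

```python
def canning_method(ph: float) -> str:
    """Suggest a canning process from a food's pH.

    Highly acidic foods (pH below 4.6) can be safely processed in an
    ordinary boiling water bath; low-acid foods need a pressure canner
    to reach the 116-130 C required to destroy C. botulinum spores.
    """
    if ph < 4.6:
        return "boiling water bath (about 100 C)"
    return "pressure canner (116-130 C)"

# Illustrative pH values only; real foods vary.
print(canning_method(3.3))  # e.g. pickled vegetables -> boiling water bath
print(canning_method(6.2))  # e.g. green beans -> pressure canner
```

Note that the boundary value 4.6 itself is treated as low acid here, matching the text's statement that only foods with a pH below 4.6 may be safely water-bath canned.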
Invented in 1888 by Max Ams, modern double seams provide an airtight seal to the tin can. This airtight nature is crucial to keeping micro-organisms out of the can and keeping its contents sealed inside. Thus, double seamed cans are also known as Sanitary Cans. Developed in 1900 in Europe, this sort of can consisted of the traditional cylindrical body made from tin plate. The two ends (lids) were attached using what is now called a double seam. A can thus sealed is impervious to contamination by creating two tight continuous folds between the can 's cylindrical body and the lids. This eliminated the need for solder and allowed improvements in manufacturing speed, reducing cost.
Double seaming uses rollers to shape the can, lid and the final double seam. To make a sanitary can and lid suitable for double seaming, manufacture begins with a sheet of coated tin plate. To create the can body, rectangles are cut and curled around a die, and welded together creating a cylinder with a side seam.
Rollers are then used to flare out one or both ends of the cylinder to create a quarter circle flange around the circumference. Precision is required to ensure that the welded sides are perfectly aligned, as any misalignment will cause inconsistent flange shape, compromising its integrity.
A circle is then cut from the sheet using a die cutter. The circle is shaped in a stamping press to create a downward countersink to fit snugly into the can body. The result can be compared to an upside down and very flat top hat. The outer edge is then curled down and around about 140 degrees using rollers to create the end curl.
The result is a steel tube with a flanged edge, and a countersunk steel disc with a curled edge. A rubber compound is put inside the curl.
The body and end are brought together in a seamer and held in place by the base plate and chuck, respectively. The base plate provides a sure footing for the can body during the seaming operation and the chuck fits snugly into the end (lid). The result is the countersink of the end sits inside the top of the can body just below the flange. The end curl protrudes slightly beyond the flange.
Once brought together in the seamer, the seaming head presses a first operation roller against the end curl. The end curl is pressed against the flange curling it in toward the body and under the flange. The flange is also bent downward, and the end and body are now loosely joined together. The first operation roller is then retracted. At this point five thicknesses of steel exist in the seam. From the outside in they are:
The seaming head then engages the second operation roller against the partly formed seam. The second operation presses all five steel components together tightly to form the final seal. The five layers in the final seam are then called: a) End, b) Body Hook, c) Cover Hook, d) Body, e) Countersink. All sanitary cans require a filling medium within the seam because otherwise the metal-to-metal contact will not maintain a hermetic seal. In most cases, a rubberized compound is placed inside the end curl radius, forming the critical seal between the end and the body.
Probably the most important innovation since the introduction of double seams is the welded side seam. Prior to the welded side seam, the can body was folded and/or soldered together, leaving a relatively thick side seam. The thick side seam required that the side-seam juncture at the end curl have more metal to curl around before closing in behind the body hook or flange, with a greater opportunity for error.
Many different parts of the seaming process are critical in ensuring that a can is airtight and vacuum sealed. If a can is not hermetically sealed, it may be contaminated by foreign matter (such as micro-organisms or fungicide sprays), or it may leak or spoil.
One important part is the seamer setup. This process is usually performed by an experienced technician. Among the parts that need setup are the seamer rolls and chucks, which have to be set in their exact positions (using a feeler gauge or a clearance gauge). The lifter pressure and position, roll and chuck designs, tooling wear, and bearing wear all contribute to a good double seam.
Incorrect setups can be non-intuitive. For example, due to the springback effect, a seam can appear loose, when in reality it was closed too tight and has opened up like a spring. For this reason, experienced operators and good seamer setup are critical to ensure that double seams are properly closed.
Quality control usually involves taking full cans from the line -- one per seamer head, at least once or twice per shift, and performing a teardown operation (wrinkle / tightness), mechanical tests (external thickness, seamer length / height and countersink) as well as cutting the seam open with a twin blade saw and measuring with a double seam inspection system. The combination of these measurements will determine the seam 's quality.
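The teardown measurements described above are typically combined into a single figure of merit for the seam. As an illustration, here is a minimal Python sketch of one commonly published way to estimate seam overlap from the body hook, cover hook, end-plate thickness and seam length. The 1.1 plate-thickening factor and the sample readings are illustrative assumptions, not a manufacturer's specification:

```python
# Hypothetical double-seam overlap estimate from teardown measurements.
# All inputs in millimetres. The 1.1 factor (allowing for plate
# thickening in the curl) follows common seam-inspection practice, but
# real acceptance limits come from the can maker's specification.

def seam_overlap(body_hook, cover_hook, end_thickness, seam_length):
    """Theoretical overlap of the body hook and cover hook."""
    return body_hook + cover_hook + 1.1 * end_thickness - seam_length

def overlap_percent(body_hook, cover_hook, end_thickness,
                    body_thickness, seam_length):
    """Overlap as a percentage of the free space inside the seam."""
    actual = seam_overlap(body_hook, cover_hook, end_thickness, seam_length)
    internal = seam_length - 1.1 * (2 * end_thickness + body_thickness)
    return 100.0 * actual / internal

# Illustrative readings from one teardown (not a real specification):
ol = seam_overlap(1.90, 1.95, 0.21, 2.90)
pct = overlap_percent(1.90, 1.95, 0.21, 0.24, 2.90)
```

An inspector would compare the overlap percentage against the acceptance limit in the relevant specification; the exact minimum varies by can size and spec.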
Use of statistical process control (SPC) software in conjunction with a manual double-seam monitor, computerized double-seam scanner, or even a fully automatic double-seam inspection system makes the laborious process of double-seam inspection faster and much more accurate. Statistically tracking the performance of each head or seaming station of the can seamer allows for better prediction of can seamer issues, and may be used to plan maintenance when convenient, rather than simply reacting after bad or unsafe cans have been produced.
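As a rough sketch of what such statistical tracking involves, the following Python fragment computes individuals-chart control limits (mean ± 3 sigma) for one seam dimension on one seamer head. The readings and the simple sigma estimate are illustrative assumptions; commercial SPC packages use rational subgrouping and head-by-head history:

```python
from statistics import mean, stdev

# Minimal SPC illustration: flag a seam-thickness reading that falls
# outside mean +/- 3 sigma for one seaming head. Real SPC software uses
# rational subgroups and moving-range sigma estimates; this is only a
# sketch of the idea.

def control_limits(samples):
    m = mean(samples)
    s = stdev(samples)            # sample standard deviation
    return m - 3 * s, m, m + 3 * s

def out_of_control(samples, reading):
    lcl, _, ucl = control_limits(samples)
    return reading < lcl or reading > ucl

# Hypothetical seam-thickness readings (mm) from one head over a shift:
head_3 = [1.14, 1.15, 1.13, 1.16, 1.14, 1.15, 1.13, 1.14]
lcl, centre, ucl = control_limits(head_3)
```

Tracking such limits per head is what lets an operator schedule maintenance on a drifting head before it produces unsafe cans.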
Canning is a way of processing food to extend its shelf life. The idea is to make food available and edible long after the processing time. A 1997 study found that canned fruits and vegetables are as rich in dietary fiber and vitamins as the corresponding fresh or frozen foods, and in some cases the canned products are richer than their fresh or frozen counterparts. The heating process during canning appears to make dietary fiber more soluble, and therefore more readily fermented in the colon into gases and physiologically active byproducts. Canned tomatoes have a higher available lycopene content. Consequently, canned meat and vegetables are often among the list of food items that are stocked during emergencies. In 2013, the Can Manufacturers Institute launched the Cans Get You Cooking campaign with the support of Crown Holdings, Inc., Ball Corporation, and Silgan Containers. The goal of the campaign is to get consumers to use more canned goods in their daily meals.
At the beginning of the 19th century, the process of canning foods was mainly done by small canneries. These canneries were rife with overlooked sanitation problems, such as poor hygiene and unsanitary work environments. Since refrigeration did not exist and industrial canning standards had not yet been established, it was very common for contaminated cans to reach grocery store shelves.
In canning toxicology, migration is the movement of substances from the can itself into the contents. Potential toxic substances that can migrate are lead, causing lead poisoning, or bisphenol A, a potential endocrine disruptor that is an ingredient in the epoxy commonly used to coat the inner surface of cans. Some cans are manufactured with a BPA - free enamel lining produced from plant oils and resins.
Salt (sodium chloride), dissolved in water, is used in the canning process. As a result, canned food can be a major source of dietary salt. Too much salt increases the risk of health problems, including high blood pressure. Therefore, health authorities have recommended limitations of dietary sodium. Many canned products are available in low - salt and no - salt alternatives.
Rinsing thoroughly after opening may reduce the amount of salt in canned foods, since much of the salt content is thought to be in the liquid, rather than the food itself.
Foodborne botulism results from contaminated foodstuffs in which C. botulinum spores have been allowed to germinate and produce botulism toxin; this typically occurs in canned non-acidic foods that have not received a sufficiently strong heat treatment. C. botulinum prefers low-oxygen environments and is a poor competitor to other bacteria, but its spores are resistant to thermal treatments. When a canned food is sterilized insufficiently, most bacteria other than the C. botulinum spores are killed, and the spores can germinate and produce botulism toxin. Botulism is a rare but serious paralytic illness, leading to paralysis that typically starts with the muscles of the face and then spreads towards the limbs. In severe forms, it leads to paralysis of the breathing muscles and causes respiratory failure. In view of this life-threatening complication, all suspected cases of botulism are treated as medical emergencies, and public health officials are usually involved to prevent further cases from the same source.
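The adequacy of a heat treatment for low-acid canned food is conventionally expressed as an F0 value: the lethality of the process in equivalent minutes at 121.1 °C. Here is a minimal Python sketch of that standard calculation; the reference temperature of 121.1 °C and z-value of 10 °C are the conventional constants for C. botulinum spores in low-acid foods, while the temperature profiles themselves are invented for illustration:

```python
# Standard F0 (sterilization value) calculation: integrate lethality
# over the product's temperature history. Reference temperature 121.1 C
# and z = 10 C are the conventional values for C. botulinum spores in
# low-acid canned foods; the temperature profiles below are invented.

def f0_value(profile_celsius, dt_minutes=1.0, t_ref=121.1, z=10.0):
    """Equivalent minutes at t_ref, from product temperatures sampled
    every dt_minutes."""
    return sum(dt_minutes * 10 ** ((t - t_ref) / z)
               for t in profile_celsius)

# Holding exactly at 121.1 C for 3 minutes gives F0 = 3, the figure
# often cited as the minimum "botulinum cook" for low-acid foods:
full_heat = f0_value([121.1, 121.1, 121.1])

# Ten degrees cooler contributes only a tenth of the lethality per minute:
partial = f0_value([111.1, 111.1, 111.1])
```

This is why "sterilized insufficiently" is a quantitative judgment: a process a few degrees cooler than specified can deliver an order of magnitude less lethality.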
Canned goods and canning supplies sell particularly well in times of recession due to the tendency of financially stressed individuals to engage in cocooning, a term used by retail analysts to describe the phenomenon in which people choose to stay at home instead of adding expenditures to their budget by dining out and socializing outside the home.
In February 2009 during a recession, the United States saw an 11.5 % rise in sales of canning - related items.
Some communities in the US have county canning centers which are available for teaching canning, or shared community kitchens which can be rented for canning one 's own foods.
A 1956 US documentary, The Miracle of the Can, mentions the pea tenderometer used in the canning trade.
why did northerners oppose the annexation of texas | Texas annexation - wikipedia
The Texas annexation was the 1845 incorporation of the Republic of Texas into the United States of America, which was admitted to the Union as the 28th state on December 29, 1845.
The Republic of Texas declared independence from the Republic of Mexico on March 2, 1836. At the time the vast majority of the Texian population favored the annexation of the Republic by the United States. The leadership of both major U.S. political parties, the Democrats and the Whigs, opposed the introduction of Texas, a vast slave - holding region, into the volatile political climate of the pro - and anti-slavery sectional controversies in Congress. Moreover, they wished to avoid a war with Mexico, whose government refused to acknowledge the sovereignty of its rebellious northern province. With Texas 's economic fortunes declining by the early 1840s, the President of the Texas Republic, Sam Houston, arranged talks with Mexico to explore the possibility of securing official recognition of independence, with Great Britain mediating.
In 1843, U.S. President John Tyler, unaligned with any political party, decided independently to pursue the annexation of Texas in a bid to gain a base of popular support for another four years in office. His official motivation was to outmaneuver suspected diplomatic efforts by the British government for emancipation of slaves in Texas, which would undermine slavery in the United States. Through secret negotiations with the Houston administration, Tyler secured a treaty of annexation in April 1844. When the documents were submitted to the US Senate for ratification, the details of the terms of annexation became public and the question of acquiring Texas took center stage in the presidential election of 1844. Pro-Texas - annexation southern Democratic delegates denied their anti-annexation leader Martin Van Buren the nomination at their party 's convention in May 1844. In alliance with pro-expansion northern Democratic colleagues, they secured the nomination of James K. Polk, who ran on a pro-Texas Manifest Destiny platform.
In June 1844, the Senate, with its Whig majority, soundly rejected the Tyler -- Texas treaty. The pro-annexation Democrat Polk narrowly defeated anti-annexation Whig Henry Clay in the 1844 presidential election. In December 1844, lame - duck President Tyler called on Congress to pass his treaty by simple majorities in each house. The Democratic - dominated House of Representatives complied with his request by passing an amended bill expanding on the pro-slavery provisions of the Tyler treaty. The Senate narrowly passed a compromise version of the House bill (by the vote of the minority Democrats and several southern Whigs), designed to provide the incoming President - elect Polk the options of immediate annexation of Texas or new talks to revise the annexation terms of the House - amended bill.
On March 1, 1845, President Tyler signed the annexation bill, and on March 3 (his last day in office), he forwarded the House version to Texas, offering immediate annexation (which preempted Polk). When Polk took office the next day, he encouraged Texas to accept the Tyler offer. Texas ratified the agreement with popular approval from Texans. The bill was signed by Polk on December 29, 1845, accepting Texas as the 28th state of the Union. Texas formally relinquished its sovereignty to the United States on February 19, 1846.
First mapped by Spain in 1519, Texas was, for over 300 years, part of the vast Spanish empire seized from its indigenous peoples by the Spanish conquistadors. When the Louisiana territory was acquired by the United States from France in 1803, many in the U.S. believed the new territory included parts or all of present-day Texas. The US-Spain border along the northern frontier of Texas took shape in the 1817 -- 1819 negotiations between Secretary of State John Quincy Adams and the Spanish ambassador to the United States, Luis de Onís y González-Vara. The boundaries of Texas were determined within the larger geostrategic struggle to demarcate the limits of the United States' extensive western lands and of Spain's vast possessions in North America. The Florida Treaty of February 22, 1819 emerged as a compromise that excluded Spain from the lower Columbia River watershed, but established southern boundaries at the Sabine and Red Rivers, "legally extinguish(ing)" any American claims to Texas. Nonetheless, Texas remained an object of fervent interest to American expansionists, among them Thomas Jefferson, who anticipated the eventual acquisition of its fertile lands.
The Missouri crisis of 1819 -- 1821 sharpened commitments to expansionism among the country's slaveholding interests, when the so-called Thomas proviso established the 36°30′ parallel, imposing free-soil and slave-soil futures in the Louisiana Purchase lands. While a majority of southern congressmen acquiesced to the exclusion of slavery from the bulk of the Louisiana Purchase, a significant minority objected. Virginian editor Thomas Ritchie of the Richmond Enquirer predicted that with the proviso restrictions, the South would ultimately require Texas: "If we are cooped up on the north, we must have elbow room to the west." Representative John Floyd of Virginia in 1824 accused Secretary of State Adams of conceding Texas to Spain in 1819 in the interests of Northern anti-slavery advocates, and so depriving the South of additional slave states. Then-Representative John Tyler of Virginia invoked the Jeffersonian precepts of territorial and commercial growth as a national goal to counter the rise of sectional differences over slavery. His "diffusion" theory declared that with Missouri open to slavery, the new state would encourage the transfer of underutilized slaves westward, emptying the eastern states of bondsmen and making emancipation feasible in the old South. This doctrine would be revived during the Texas annexation controversy.
When Mexico won its independence from Spain in 1821, the United States did not contest the new republic 's claims to Texas, and both presidents John Quincy Adams (1825 -- 1829) and Andrew Jackson (1829 -- 1837) persistently sought, through official and unofficial channels, to procure all or portions of provincial Texas from the Mexican government, without success.
Spanish and indigenous settlers, primarily from the northeastern provinces of New Spain, began to settle Texas in the late 17th century. The Spanish constructed chains of missions and presidios in what is today Louisiana, East Texas and South Texas. The first chain of missions was designed for the Tejas Indians, near Los Adaes. Soon thereafter, the San Antonio missions were founded along the San Antonio River. The city of San Antonio, then known as San Fernando de Bexar, was founded in 1719. In the early 1760s, Jose de Escandon created five settlements along the Rio Grande, including Laredo.
Anglo-American immigrants, primarily from the Southern United States, began emigrating to Mexican Texas in the early 1820s at the invitation of the Texas faction of the Coahuila y Texas state government, which sought to populate the sparsely inhabited lands of its northern frontier for cotton production. Colonizing empresario Stephen F. Austin managed the regional affairs of the mostly American-born population -- 20% of them slaves -- under the terms of the generous government land grants. Mexican authorities were initially content to govern the remote province through salutary neglect, "permitting slavery under the legal fiction of 'permanent indentured servitude'", similar to Mexico's peonage system.
A general lawlessness prevailed in the vast Texas frontier, and Mexico's civic laws went largely unenforced among the Anglo-American settlers. In particular, the prohibitions against slavery and forced labor were ignored. The requirement that all settlers be Catholic or convert to Catholicism was also subverted. Mexican authorities, perceiving that they were losing control over Texas and alarmed by the unsuccessful Fredonian Rebellion of 1826, abandoned the policy of benign rule. New restrictions were imposed in 1829 -- 1830, outlawing slavery throughout the nation and terminating further American immigration to Texas. Military occupation followed, sparking local uprisings and a civil war. Texas conventions in 1832 and 1833 submitted petitions for redress of grievances to overturn the restrictions, with limited success. In 1835, an army under Mexican President Santa Anna entered Texas and abolished self-government there. Texans responded by declaring their independence from Mexico on March 2, 1836. On April 20 -- 21, rebel forces under Texas General Sam Houston defeated the Mexican army at the Battle of San Jacinto. In June 1836, Santa Anna agreed to Texas independence, but the Mexican government refused to honor Santa Anna's pledge. Texans, now de facto independent, recognized that their security and prosperity could never be achieved while Mexico denied the legitimacy of their revolution.
In the years following independence, the migration of white settlers and importation of black slave labor into the vast republic was deterred by Texas 's unresolved international status and the threat of renewed warfare with Mexico. American citizens who considered migrating to the new republic perceived that "life and property were safer within the United States '' than in an independent Texas. The situation led to labor shortages, reduced tax revenue, large national debts and a diminished Texas militia.
The Anglo - American immigrants residing in newly - independent Texas overwhelmingly desired immediate annexation by the United States. But, despite his strong support for Texas independence from Mexico, then - President Andrew Jackson delayed recognizing the new republic until the last day of his presidency to avoid raising the issue during the 1836 general election. Jackson 's political caution was informed by northern concerns that Texas could potentially form several new slave states and undermine the North - South balance in Congress.
Jackson 's successor, President Martin Van Buren, viewed Texas annexation as an immense political liability that would empower the anti-slavery northern Whig opposition -- especially if annexation provoked a war with Mexico. Presented with a formal annexation proposal from Texas minister Memucan Hunt, Jr. in August 1837, Van Buren summarily rejected it. Annexation resolutions presented separately in each house of Congress were either soundly defeated or tabled through filibuster. After the election of 1838, new Texas president Mirabeau B. Lamar withdrew his republic 's offer of annexation due to these failures. Texans were at an annexation impasse when John Tyler entered the White House in 1841.
William Henry Harrison, Whig Party presidential nominee, defeated US President Martin Van Buren in the 1840 general election. Upon Harrison 's death shortly after his inauguration, Vice-President John Tyler assumed the presidency. President Tyler was expelled from the Whig party in 1841 for repeatedly vetoing their domestic finance legislation. Tyler, isolated and outside the two - party mainstream, turned to foreign affairs to salvage his presidency, aligning himself with a southern states ' rights faction that shared his fervent slavery expansionist views.
In his first address to Congress in special session on June 1, 1841, Tyler set the stage for Texas annexation by announcing his intention to pursue an expansionist agenda so as to preserve the balance between state and national authority and to protect American institutions, including slavery, so as to avoid sectional conflict. Tyler 's closest advisors counseled him that obtaining Texas would assure him a second term in the White House, and it became a deeply personal obsession for the president, who viewed the acquisition of Texas as the "primary objective of his administration ''. Tyler delayed direct action on Texas to work closely with his Secretary of State Daniel Webster on other pressing diplomatic initiatives.
With the Webster - Ashburton Treaty ratified in 1843, Tyler was ready to make the annexation of Texas his "top priority ''. Representative Thomas W. Gilmer of Virginia was authorized by the administration to make the case for annexation to the American electorate. In a widely circulated open letter, understood as an announcement of the executive branch 's designs for Texas, Gilmer described Texas as a panacea for North - South conflict and an economic boon to all commercial interests. The slavery issue, however divisive, would be left for the states to decide as per the US Constitution. Domestic tranquility and national security, Tyler argued, would result from an annexed Texas; a Texas left outside American jurisdiction would imperil the Union. Tyler adroitly arranged the resignation of his anti-annexation Secretary of State Daniel Webster, and on June 23, 1843 appointed Abel P. Upshur, a Virginia states ' rights champion and ardent proponent of Texas annexation. This cabinet shift signaled Tyler 's intent to pursue Texas annexation aggressively.
In late September 1843, in an effort to cultivate public support for Texas, Secretary Upshur dispatched a letter to the US Minister to Great Britain, Edward Everett, conveying his displeasure with Britain's global anti-slavery posture, and warning their government that forays into Texas's affairs would be regarded as "tantamount to direct interference with the established institutions of the United States". In a breach of diplomatic norms, Upshur leaked the communique to the press to inflame popular Anglophobic sentiments among American citizens.
In the spring of 1843, the Tyler administration had sent executive agent Duff Green to Europe to gather intelligence and arrange territorial treaty talks with Great Britain regarding Oregon; he also worked with the American minister to France, Lewis Cass, to thwart efforts by major European powers to suppress the maritime slave trade. Green reported to Secretary Upshur in July 1843 that he had discovered a "loan plot" by American abolitionists, in league with Lord Aberdeen, the British Foreign Secretary, to provide funds to Texas in exchange for the emancipation of its slaves. Minister Everett was charged with determining the substance of these confidential reports alleging a Texas plot. His investigations, including personal interviews with Lord Aberdeen, concluded that British interest in abolitionist intrigues was weak, contradicting Secretary of State Upshur's conviction that Great Britain was manipulating Texas. Though unsubstantiated, Green's unofficial intelligence so alarmed Tyler that he requested verification from the US minister to Mexico, Waddy Thompson.
John C. Calhoun of South Carolina, a pro-slavery extremist, counseled Secretary Upshur that British designs on American slavery were real and required immediate action to preempt a takeover of Texas by Great Britain. When Tyler confirmed in September that British Foreign Secretary Aberdeen had encouraged détente between Mexico and Texas, allegedly pressing Mexico to maneuver Texas towards emancipation of its slaves, Tyler acted at once. On September 18, 1843, in consultation with Secretary Upshur, he ordered secret talks opened with the Texas Minister to the United States, Isaac Van Zandt, to negotiate the annexation of Texas. Face-to-face negotiations commenced on October 16, 1843.
By the summer of 1843 Sam Houston 's Texas administration had returned to negotiations with the Mexican government to consider a rapprochement that would permit Texas self - governance, possibly as a state of Mexico, with Great Britain acting as mediator. Texas officials felt compelled by the fact that the Tyler administration appeared unequipped to mount an effective campaign for Texas annexation. With the 1844 general election in the United States approaching, the leadership in both the Democratic and Whig parties remained unequivocally anti-Texas. Texas - Mexico treaty options under consideration included an autonomous Texas within Mexico 's borders, or an independent republic with the provision that Texas should emancipate its slaves upon recognition.
Van Zandt, though he personally favored annexation by the United States, was not authorized to entertain any overtures from the US government on the subject. Texas officials were at the moment deeply engaged in exploring settlements with Mexican diplomats, facilitated by Great Britain. Texas's predominant concern was not British interference with the institution of slavery -- English diplomats had not alluded to the issue -- but the avoidance of any resumption of hostilities with Mexico. Still, US Secretary of State Upshur vigorously courted Texas diplomats to begin annexation talks, finally dispatching an appeal to President Sam Houston in January 1844. In it, he assured Houston that, in contrast to previous attempts, the political climate in the United States, including sections of the North, was amenable to Texas statehood, and that a two-thirds majority in the Senate could be obtained to ratify a Texas treaty.
Texans were hesitant to pursue a US - Texas treaty without a written commitment of military defense from America, since a full - scale military attack by Mexico seemed likely when the negotiations became public. If ratification of the annexation measure stalled in the US Senate, Texas could face a war alone against Mexico. Because only Congress could declare war, the Tyler administration lacked the constitutional authority to commit the US to support of Texas. But when Secretary Upshur provided a verbal assurance of military defense, President Houston, responding to urgent calls for annexation from the Texas Congress of December 1843, authorized the reopening of annexation negotiations.
As Secretary Upshur accelerated the secret treaty discussions, Mexican diplomats learned that US - Texas talks were taking place. Mexican minister to the U.S. Juan Almonte confronted Upshur with these reports, warning him that if Congress sanctioned a treaty of annexation, Mexico would break diplomatic ties and immediately declare war. Secretary Upshur evaded and dismissed the charges, and pressed forward with the negotiations. In tandem with moving forward with Texas diplomats, Upshur was secretly lobbying US Senators to support annexation, providing lawmakers with persuasive arguments linking Texas acquisition to national security and domestic peace. By early 1844, Upshur was able to assure Texas officials that 40 of the 52 members of the Senate were pledged to ratify the Tyler - Texas treaty, more than the two - thirds majority required for passage. Tyler, in his annual address to Congress in December 1843, maintained his silence on the secret treaty, so as not to damage relations with the wary Texas diplomats. Throughout, Tyler did his utmost to keep the negotiations secret, making no public reference to his administration 's single - minded quest for Texas.
The Tyler - Texas treaty was in its final stages when its chief architects, Secretary Upshur and Secretary of the Navy Thomas W. Gilmer, died in an accident aboard USS Princeton on February 28, 1844, just a day after achieving a preliminary treaty draft agreement with the Texas Republic. The Princeton disaster proved a major setback for Texas annexation, in that Tyler expected Secretary Upshur to elicit critical support from Whig and Democratic Senators during the upcoming treaty ratification process. Tyler selected John C. Calhoun to replace Upshur as Secretary of State and to finalize the treaty with Texas. The choice of Calhoun, a highly regarded but controversial American statesman, risked introducing a politically polarizing element into the Texas debates, but Tyler prized him as a strong advocate of annexation.
With the Tyler - Upshur secret annexation negotiations with Texas near consummation, Senator Robert J. Walker of Mississippi, a key Tyler ally, issued a widely distributed and highly influential letter, reproduced as a pamphlet, making the case for immediate annexation. In it, Walker argued that Texas could be acquired by Congress in a number of ways -- all constitutional -- and that the moral authority to do so was based on the precepts for territorial expansion established by Jefferson and Madison, and promulgated as doctrine by Monroe in 1823. Senator Walker 's polemic offered analysis on the significance of Texas with respect to slavery and race. He envisioned Texas as a corridor through which both free and enslaved African - Americans could be "diffused '' southward in a gradual exodus that would ultimately supply labor to the Central American tropics, and in time, empty the United States of its slave population.
This "safety - valve '' theory "appealed to the racial fears of northern whites '' who dreaded the prospect of absorbing emancipated slaves into their communities in the event that the institution of slavery collapsed in the South. This scheme for racial cleansing was consistent, on a pragmatic level, with proposals for overseas colonization of blacks, which were pursued by a number of American presidents, from Jefferson to Lincoln. Walker bolstered his position by raising national security concerns, warning that in the event annexation failed, imperialist Great Britain would maneuver the Republic of Texas into emancipating its slaves, forecasting a dangerous destabilizing influence on southwestern slaveholding states. The pamphlet characterized abolitionists as traitors who conspired with the British to overthrow the United States.
A variation of Tyler's "diffusion" theory, Walker's argument played on economic fears in a period when slave-based staple crop markets had not yet recovered from the Panic of 1837. The Texas "escape route" conceived by Walker promised to increase demand for slaves in the fertile cotton-growing regions of Texas, as well as the monetary value of slaves. Cash-poor plantation owners in the older eastern South were promised a market for surplus slaves at a profit. Texas annexation, wrote Walker, would eliminate all these dangers and "fortify the whole Union".
Walker's pamphlet brought forth strident demands for Texas from pro-slavery expansionists in the South; in the North, it allowed anti-slavery expansionists to embrace Texas without appearing to be aligned with pro-slavery extremists. His assumptions and analysis "shaped and framed the debates on annexation", but his premises went largely unchallenged among the press and public.
The Tyler - Texas treaty, signed on April 12, 1844, was framed to induct Texas into the Union as a territory, following constitutional protocols. To wit, Texas would cede all its public lands to the United States, and the federal government would assume all its bonded debt, up to $10 million. The boundaries of the Texas territory were left unspecified. Four new states could ultimately be carved from the former republic -- three of them likely to become slave states. Any allusion to slavery was omitted from the document so as not to antagonize anti-slavery sentiments during Senate debates, but it provided for the "preservation of all (Texas) property as secured in our domestic institutions. ''
Upon the signing of the treaty, Tyler complied with the Texans ' demand for military and naval protection, deploying troops to Fort Jesup in Louisiana and a fleet of warships to the Gulf of Mexico. In the event that the Senate failed to pass the treaty, Tyler promised the Texas diplomats that he would officially exhort both houses of Congress to establish Texas as a state of the Union upon provisions authorized in the Constitution. Tyler 's cabinet was split on the administration 's handling of the Texas agreement. Secretary of War William Wilkins praised the terms of annexation publicly, touting the economic and geostrategic benefits with relation to Great Britain. Secretary of the Treasury John C. Spencer was alarmed at the constitutional implications of Tyler 's application of military force without congressional approval, a violation of the separation of powers. Refusing to transfer contingency funds for the naval mobilization, he resigned.
Tyler submitted his treaty for annexation to the Senate, delivered April 22, 1844, where a two-thirds majority was required for ratification. Secretary of State Calhoun (assuming his post March 29, 1844) had sent a letter to British minister Richard Packenham denouncing British anti-slavery interference in Texas. He included the Packenham letter with the treaty documents, intending to create a sense of crisis among Southern Democrats. In it, he characterized slavery as a social blessing and the acquisition of Texas as an emergency measure necessary to safeguard the "peculiar institution" in the United States. In doing so, Tyler and Calhoun sought to unite the South in a crusade that would present the North with an ultimatum: support Texas annexation or lose the South.
President Tyler expected that his treaty would be debated secretly in Senate executive session. However, less than a week after debates opened, the treaty, its associated internal correspondence, and the Packenham letter were leaked to the public. The nature of the Tyler - Texas negotiations caused a national outcry, in that "the documents appeared to verify that the sole objective of Texas annexation was the preservation of slavery. '' A mobilization of anti-annexation forces in the North strengthened both major parties ' hostility toward Tyler 's agenda. The leading presidential hopefuls of both parties, Democrat Martin Van Buren and Whig Henry Clay, publicly denounced the treaty. Texas annexation and the reoccupation of Oregon territory emerged as the central issues in the 1844 general election.
In response, Tyler, already ejected from the Whig party, quickly began to organize a third party in hopes of inducing the Democrats to embrace a pro-expansionist platform. By running as a third - party candidate, Tyler threatened to siphon off pro-annexation Democratic voters; Democratic party disunity would mean the election of Henry Clay, a staunchly anti-Texas Whig. Pro-annexation delegates among southern Democrats, with assistance from a number of northern delegates, blocked anti-expansion candidate Martin Van Buren at the convention, which instead nominated the pro-expansion champion of Manifest Destiny, James K. Polk of Tennessee. Polk unified his party under the banner of Texas and Oregon acquisition.
In August 1844, in the midst of the campaign, Tyler withdrew from the race. The Democratic Party was by then unequivocally committed to Texas annexation, and Tyler, assured by Polk 's envoys that as President he would effect Texas annexation, urged his supporters to vote Democratic. Polk narrowly defeated Whig Henry Clay in the November election. The victorious Democrats were poised to acquire Texas under President - elect Polk 's doctrine of Manifest Destiny, rather than on the pro-slavery agenda of Tyler and Calhoun.
As a treaty document with a foreign nation, the Tyler-Texas annexation treaty required the support of a two-thirds majority in the Senate for passage. In fact, when the Senate voted on the measure on June 8, 1844, fully two-thirds voted against the treaty (16 -- 35). The vote went largely along party lines: Whigs opposed it almost unanimously (1 -- 27), while Democrats split but voted mostly in favor (15 -- 8). The election campaign had hardened partisan positions on Texas among Democrats. Tyler had anticipated that the measure would fail, due largely to the divisive effects of Secretary Calhoun's Packenham letter. Undeterred, he formally asked the House of Representatives to consider other constitutional means to authorize passage of the treaty. Congress adjourned before debating the matter.
The same Senate that had rejected the Tyler -- Calhoun treaty by a margin of 2: 1 in June 1844 reassembled in December 1844 in a short lame - duck session. (Though pro-annexation Democrats had made gains in the fall elections, those legislators -- the 29th Congress -- would not assume office until March 1845.) Lame - duck President Tyler, still trying to annex Texas in the final months of his administration, wished to avoid another overwhelming Senate rejection of his treaty. In his annual address to Congress on December 4, he declared the Polk victory a mandate for Texas annexation and proposed that Congress adopt a joint resolution procedure by which simple majorities in each house could secure ratification for the Tyler treaty. This method would avoid the constitutional requirement of a two - thirds majority in the Senate. Bringing the House of Representatives into the equation boded well for Texas annexation, as the pro-annexation Democratic Party possessed nearly a 2: 1 majority in that chamber.
By resubmitting the discredited treaty through a House - sponsored bill, the Tyler administration reignited sectional hostilities over Texas admission. Both northern Democratic and southern Whig Congressmen had been bewildered by local political agitation in their home states during the 1844 presidential campaigns. Now, northern Democrats found themselves vulnerable to charges of appeasement of their southern wing if they capitulated to Tyler 's slavery expansion provisions. On the other hand, Manifest Destiny enthusiasm in the north placed politicians under pressure to admit Texas immediately to the Union.
Constitutional objections were raised in House debates as to whether both houses of Congress could constitutionally authorize admission of territories, rather than states. Moreover, if the Republic of Texas, a nation in its own right, were admitted as a state, its territorial boundaries, property relations (including slave property), debts and public lands would require a Senate - ratified treaty. Democrats were particularly uneasy about burdening the United States with $10 million in Texas debt, resenting the deluge of speculators, who had bought Texas bonds cheap and now lobbied Congress for the Texas House bill. House Democrats, at an impasse, relinquished the legislative initiative to the southern Whigs.
Anti-Texas Whig legislators had lost more than the White House in the general election of 1844. In the southern states of Tennessee and Georgia, Whig strongholds in the 1840 general election, voter support dropped precipitously due to the pro-annexation excitement in the Deep South -- and Clay lost every Deep South state to Polk. Northern Whigs ' uncompromising hostility to slavery expansion increasingly characterized the party, and southern members, by association, had suffered from charges of being "soft on Texas, therefore soft on slavery '' by Southern Democrats. Facing congressional and gubernatorial races in 1845 in their home states, a number of Southern Whigs sought to erase that impression with respect to the Tyler - Texas bill.
Southern Whigs in the Congress, including Representative Milton Brown and Senator Ephraim Foster, both of Tennessee, and Representative Alexander Stephens of Georgia collaborated to introduce a House amendment on January 13, 1845 that was designed to enhance slaveowner gains in Texas beyond those offered by the Democratic - sponsored Tyler - Calhoun treaty bill. The legislation proposed to recognize Texas as a slave state which would retain all its vast public lands, as well as its bonded debt accrued since 1836. Furthermore, the Brown amendment would delegate to the U.S. government responsibility for negotiating the disputed Texas - Mexico boundary. The issue was a critical one, as the size of Texas would be immensely increased if the international border were set at the Rio Grande River, with its headwaters in the Rocky Mountains, rather than the traditionally recognized boundary at the Nueces River, 100 miles to the north. While the Tyler - Calhoun treaty provided for the organization of a total of four states from the Texas lands -- three likely to qualify as slave states -- Brown 's plan would permit Texas state lawmakers to configure a total of five states from its western region, south of the 36 ° 30 ' Missouri Compromise line, each pre-authorized to permit slavery upon statehood, if Texas designated them as such.
Politically, the Brown amendment was designed to portray Southern Whigs as "even more ardent champions of slavery and the South, than southern Democrats. '' The bill also served to distinguish them from their northern Whig colleagues who cast the controversy, as Calhoun did, in strictly pro - versus anti-slavery terms. While almost all Northern Whigs spurned Brown 's amendment, the Democrats quickly co-opted the legislation, providing the votes necessary to attach the proviso to Tyler 's joint resolution, by a 118 -- 101 vote. Southern Democrats supported the bill almost unanimously (59 -- 1), while Northern Democrats split strongly in favor (50 -- 30). Eight of eighteen Southern Whigs cast their votes in favor. Northern Whigs unanimously rejected it. The House proceeded to approve the amended Texas treaty 120 -- 98 on January 25, 1845. The vote in the House had been one in which party affiliation prevailed over sectional allegiance. The bill was forwarded the same day to the Senate for debate.
By early February 1845, when the Senate began to debate the Brown - amended Tyler treaty, its passage seemed unlikely, as support was "perishing ''. The partisan alignments in the Senate were near parity, 28 -- 24, slightly in favor of the Whigs. The Senate Democrats would require undivided support among their colleagues, and three or more Whigs who would be willing to cross party lines to pass the House - amended treaty. The fact that Senator Foster had drafted the House amendment under consideration improved prospects of Senate passage.
Anti-annexation Senator Thomas Hart Benton of Missouri had been the only Southern Democrat to vote against the Tyler - Texas measure in June 1844. His original proposal for an annexed Texas had embodied a national compromise, whereby Texas would be divided in two, half slave - soil and half free - soil. As pro-annexation sentiment grew in his home state, Benton retreated from this compromise offer. By February 5, 1845, in the early debates on the Brown - amended House bill, he advanced an alternative resolution that, unlike the Brown scenario, made no reference whatsoever to the ultimate free - slave apportionment of an annexed Texas and simply called for five bipartisan commissioners to resolve border disputes with Texas and Mexico and set conditions for the Lone Star Republic 's acquisition by the United States.
The Benton proposal was intended to calm northern anti-slavery Democrats (who wished to eliminate the Tyler - Calhoun treaty altogether, as it had been negotiated on behalf of the slavery expansionists), and allow the decision to devolve upon the soon - to - be-inaugurated Democratic President - elect James K. Polk. President - elect Polk had expressed his ardent wish that Texas annexation should be accomplished before he entered Washington in advance of his inauguration on March 4, 1845, the same day Congress would end its session. With his arrival in the capital, he discovered the Benton and Brown factions in the Senate "paralyzed '' over the Texas annexation legislation. On the advice of his soon - to - be Secretary of the Treasury Robert J. Walker, Polk urged Senate Democrats to unite under a dual resolution that would include both the Benton and Brown versions of annexation, leaving enactment of the legislation to Polk 's discretion when he took office. In private and separate talks with supporters of both the Brown and Benton plans, Polk left each side with the "impression he would administer their (respective) policy. Polk meant what he said to Southerners and meant to appear friendly to the Van Burenite faction. '' Polk 's handling of the matter had the effect of uniting Senate northern Democrats in favor of the dual alternative treaty bill.
On February 27, 1845, less than a week before Polk's inauguration, the Senate voted 27 -- 25 to admit Texas, based on the Tyler protocols of simple majority passage. All twenty-four Democrats voted for the measure, joined by three southern Whigs. Benton and his allies were assured that Polk would act to establish the eastern portion of Texas as a slave state; the western section was to remain unorganized territory, not committed to slavery. On this understanding, the northern Democrats had conceded their votes for the dichotomous bill. The next day, in an almost strict party-line vote, the dual Brown -- Benton measure was passed in the Democrat-controlled House of Representatives. President Tyler signed the bill the following day, March 1, 1845 (Joint Resolution for annexing Texas to the United States, J. Res. 8, enacted March 1, 1845, 5 Stat. 797).
Senate and house legislators who had favored Benton 's renegotiated version of the Texas annexation bill had been assured that President Tyler would sign the joint house measure, but leave its implementation to the incoming Polk administration. But, during his last day in office, President Tyler, with the urging of his Secretary of State Calhoun, decided to act decisively to improve the odds for the immediate annexation of Texas. On March 3, 1845, with his cabinet 's assent, he dispatched an offer of annexation to the Republic of Texas by courier, exclusively under the terms of the Brown -- Foster option of the joint house measure. Secretary Calhoun apprised President - elect Polk of the action, who demurred without comment. Tyler justified his preemptive move on the grounds that Polk was likely to come under pressure to abandon immediate annexation and reopen negotiations under the Benton alternative.
When President Polk took office on March 4, he was in a position to recall Tyler 's dispatch to Texas and reverse his decision. On March 10, after conferring with his cabinet, Polk upheld Tyler 's action and allowed the courier to proceed with the offer of immediate annexation to Texas. The only modification was to exhort Texans to accept the annexation terms unconditionally. Polk 's decision was based on his concern that a protracted negotiation by US commissioners would expose annexation efforts to foreign intrigue and interference. While Polk kept his annexation endeavors confidential, Senators passed a resolution requesting formal disclosure of the administration 's Texas policy. Polk stalled, and when the Senate special session had adjourned on March 20, 1845, no names for US commissioners to Texas had been submitted by him. Polk denied charges from Senator Benton that he had misled Benton on his intention to support the new negotiations option, declaring "if any such pledges were made, it was in a total misconception of what I said or meant. ''
On May 5, 1845, Texas President Jones called for a convention on July 4, 1845, to consider the annexation and a constitution. On June 23, the Texan Congress accepted the US Congress 's joint resolution of March 1, 1845, annexing Texas to the United States, and consented to the convention. On July 4, the Texas convention debated the annexation offer and almost unanimously passed an ordinance assenting to it. The convention remained in session through August 28, and adopted the Constitution of Texas on August 27, 1845. The citizens of Texas approved the annexation ordinance and new constitution on October 13, 1845.
President Polk signed the legislation making the former Lone Star Republic a state of the Union on December 29, 1845 (Joint Resolution for the admission of the state of Texas into the Union, J. Res. 1, enacted December 29, 1845, 9 Stat. 108). Texas formally relinquished its sovereignty to the United States on February 14, 1846.
The formal controversy over the legality of the annexation of Texas stems from the fact that Congress approved the annexation of Texas as a state, rather than a territory, with simple majorities in each house, instead of annexing the land by Senate treaty, as was done with Native American lands. Tyler 's extralegal joint resolution maneuver in 1844 exceeded strict constructionist precepts, but was passed by Congress in 1845 as part of a compromise bill. The success of the joint house Texas annexation set a precedent that would be applied to Hawaii 's annexation in 1897.
Republican President Benjamin Harrison (1889 -- 1893) attempted, in 1893, to annex Hawaii through a Senate treaty. When this failed, he was asked to consider the Tyler joint house precedent; he declined. Democratic President Grover Cleveland (1893 -- 1897) did not pursue the annexation of Hawaii. When President William McKinley took office in 1897, he quickly revived expectations among territorial expansionists when he resubmitted legislation to acquire Hawaii. When the two - thirds Senate support was not forthcoming, committees in the House and Senate explicitly invoked the Tyler precedent for the joint house resolution, which was successfully applied to approve the annexation of Hawaii in July 1898.
|
who won the last hot dog eating contest | Nathan's Hot Dog Eating Contest - wikipedia
The Nathan's Hot Dog Eating Contest is an annual American hot dog competitive eating contest. It is held each year on Independence Day at Nathan's Famous Corporation's original and best-known restaurant, at the corner of Surf and Stillwell Avenues in Coney Island, a neighborhood of Brooklyn, New York City.
The contest has gained public attention in recent years due to the stardom of Takeru Kobayashi and Joey Chestnut. The defending champion is Joey Chestnut, who ate 72 hot dogs in the 2017 contest. He beat out Carmen Cincotti and the 2015 champ, Matt Stonie.
Major League Eating (MLE), formerly known as the International Federation of Competitive Eating (IFOCE), has sanctioned the event since 1997. Today, only entrants currently under contract by MLE can compete in the contest.
The field of about 20 contestants typically includes the following:
The competitors stand on a raised platform behind a long table with drinks and Nathan's Famous hot dogs in buns. Most contestants have water on hand, but other kinds of drinks can be and have been used. Condiments are allowed, but usually are not used. The hot dogs are allowed to cool slightly after grilling to prevent possible mouth burns. The contestant who consumes (and keeps down) the most hot dogs and buns (HDB) in ten minutes is declared the winner. The length of the contest has varied over the years: it was previously 12 minutes, in some years only three and a half minutes, and since 2008 it has been 10 minutes.
Spectators watch and cheer the eaters on from close proximity. A designated scorekeeper is paired with each contestant, flipping a number board to count each hot dog consumed. Partially eaten hot dogs count, measured to a granularity of one-eighth of a hot dog. Hot dogs still in the mouth at the end of regulation count if they are subsequently swallowed. Yellow penalty cards can be issued for "messy eating," and red penalty cards can be issued for a "reversal of fortune" (vomiting), which results in disqualification. If there is a tie, the contestants go to a 5-hot-dog eat-off to see who can eat that many more quickly. A further tie results in a sudden-death eat-off: one more hot dog, fastest time wins.
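The scoring rules above — counts kept to an eighth of a hot dog, with the highest total winning and a tie sending the leaders to an eat-off — can be sketched in code. This is only an illustration; the function name, data layout, and example scores are my own, not anything used by Major League Eating:

```python
from fractions import Fraction

def winner(totals):
    """Return the champion given {name: hot dogs and buns eaten}.

    Counts are kept as exact fractions because partially eaten hot
    dogs score to a granularity of one eighth. A single leader wins
    outright; a tie returns all tied names, who would then go to a
    5-hot-dog eat-off (and sudden death after that), not modeled here.
    """
    best = max(totals.values())
    leaders = [name for name, count in totals.items() if count == best]
    return leaders[0] if len(leaders) == 1 else leaders

# Hypothetical scores for illustration: 10 1/8 dogs edges out 10 even.
champ = winner({"Eater A": Fraction(81, 8), "Eater B": Fraction(10)})
```

Exact fractions are used rather than floats so that an eighth-of-a-dog margin compares exactly, never falling victim to floating-point rounding.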
After the winner is declared, a plate showing the number of hot dogs eaten by the winner is brought out for photo opportunities.
The winner of the men 's competition is given possession of the coveted international "bejeweled '' mustard - yellow belt. The belt is of "unknown age and value '' according to IFOCE co-founder George Shea and rests in the country of its owner. In 2011, Sonya Thomas won the inaugural women 's competition and its "bejeweled '' pink belt.
Various other prizes have been awarded over the years. For example, in 2004 Orbitz donated a travel package to the winner. Starting in 2007, cash prizes have been awarded to the top finishers.
The Nathan 's Hot Dog Eating Contest has been held at the original location on Coney Island most years since about 1972, usually in conjunction with Independence Day. According to legend, on July 4, 1916, four immigrants held a hot dog eating contest at Nathan 's Famous stand on Coney Island to settle an argument about who was the most patriotic. The contest has supposedly been held each year since then except 1941 ("as a protest to the war in Europe '') and 1971 (as a protest to political unrest in the U.S.). A man by the name of Jim Mullen is said to have won the first contest, although accounts vary. One account describes Jimmy Durante (who was not an immigrant) as competing in that all - immigrant inaugural contest, which was judged by Eddie Cantor and Sophie Tucker. Another describes the event as beginning "in 1917, and pitted Mae West 's father, Jack, against entertainer Eddie Cantor. '' In 2010, however, promoter Mortimer "Morty '' Matz admitted to having fabricated the legend of the 1916 start date with a man named Max Rosey in the early 1970s as part of a publicity stunt. The legend grew over the years, to the point where The New York Times and other publications were known to have repeatedly listed 1916 as the inaugural year, although no evidence of the contest exists. As Coney Island is often linked with recreational activities of the summer season, several early contests were held on other holidays associated with summer besides Independence Day; Memorial Day contests were scheduled for 1972, 1975, and 1978, and a second 1972 event was held on Labor Day.
In the late 1990s and early 2000s, the competition was dominated by Japanese contestants, particularly Takeru Kobayashi, who won six consecutive contests from 2001 - 2006. In 2001, Kobayashi transformed the competition and the world of competitive eating by downing 50 hot dogs -- smashing the previous record (25.5). The Japanese eater introduced advanced eating and training techniques that shattered previous competitive eating world records. The rise in popularity of the event coincided with the surge in popularity of the worldwide competitive eating circuit.
On July 4, 2011, Sonya Thomas became the champion of the first Nathan 's Hot Dog Eating Contest for Women (previously women had competed against the men, except in one competition that was apparently held in 1975). Eating 40 hot dogs in 10 minutes, Thomas earned the inaugural Pepto - Bismol - sponsored pink belt and won $10,000.
In recent years, a considerable amount of pomp and circumstance have surrounded the days leading up to the event, which has become an annual spectacle of competitive entertainment. The event is presented on an extravagant stage complete with colorful live announcers and an overall party atmosphere. The day before the contest is a public weigh - in with the mayor of New York City. Some competitors don flamboyant costumes and / or makeup, while others may promote themselves with eating - related nicknames. On the morning of the event, they have a heralded arrival to Coney Island on the "bus of champions '' and are called to the stage individually during introductions. In 2013, six - time defending champion Joey Chestnut was escorted to the stage in a sedan chair.
The competition draws many spectators and worldwide press coverage. In 2007, an estimated 50,000 came out to witness the event. In 2004 a three - story - high "Hot Dog Eating Wall of Fame '' was erected at the site of the annual contest. The wall lists past winners, and has a digital clock which counts down the minutes until the next contest. Despite substantial damage suffered at Nathan 's due to Hurricane Sandy in October 2012, the location was repaired, reopened, and the 2013 event was held as scheduled.
ESPN has long enjoyed solid ratings from its broadcast of the Hot Dog Eating Contest on Independence Day, and on July 1, 2014, the network announced it had extended its agreement with Major League Eating and will broadcast the contest through 2024. The event continues to be recognized for its power as a marketing tool.
Controversies usually revolve around supposed breaches of rules that are missed by the judges. For example, NY1 television news editor Phil Ellison reviewed taped footage of the 1999 contest and thought that Steve Keiner started eating at the count of one, but the judge, Mike DeVito -- himself the champion of the 1990, 1993, and 1994 contests -- was stationed directly in front of Keiner and disputed it, saying it was incorrect. Keiner ate 21½ dogs, as shown on the Wall of Fame located at Nathan's flagship store at the corner of Surf and Stillwell Avenues in Coney Island. This controversy was stoked by George Shea, the chief publicist for Nathan's, because it generated much more publicity for the contest. Shea assured Keiner at the end of the contest that he would clear up the confusion but never did. Because of this, Keiner never participated in any advertising or contests set up by Shea.
Another controversy occurred in 2003 when former NFL player William "The Refrigerator '' Perry competed as a celebrity contestant. Though he had won a qualifier by eating twelve hot dogs, he ate only four at the contest, stopping after just five minutes. Shea stated that the celebrity contestant experiment will likely not be repeated.
At the 2007 contest, the results were delayed to review whether defending champion Takeru Kobayashi had vomited (also known as a "Roman method incident '' or "reversal of fortune '') in the final seconds of regulation. Such an incident results in the disqualification of the competitor under the rules of the IFOCE. The judges ruled in Kobayashi 's favor. A similar incident occurred involving Kobayashi in 2002 in a victory over Eric "Badlands '' Booker.
Takeru Kobayashi has not competed in the contest since 2009 due to his refusal to sign an exclusive contract with Major League Eating, which is the current sanctioning body of the contest. In 2010, he was arrested by police after attempting to jump on the stage after the contest was over and disrupt the proceedings. On August 5, 2010, all charges against Kobayashi were dismissed by a judge in Brooklyn. Despite his six consecutive victories in their annual event, Nathan 's removed Kobayashi 's image from their "Wall of Fame '' in 2011. Kobayashi again refused to compete in 2011, but instead conducted his own hot dog eating exhibition, claiming to have consumed 69 HDB, seven more than Joey Chestnut accomplished in the Nathan 's contest. The sports website Deadspin deemed Kobayashi 's solo appearance "an improbably perfect ' up yours ' to the Nathan 's hot dog eating contest. ''.
* -- Note: though Walter Paul's 1967 feat is documented in at least two UPI press accounts from the time, he has also been mentioned in passing in more recent press accounts for supposedly establishing the contest's then-record 17 hot dogs consumed; several other people have similarly been credited with records of 13½, 17½, or 18½ hot dogs consumed. The following feats are not known to be documented more fully in press accounts from the time of their occurrence and, as such, may not be credible and are not included in the Results table above:
"Several years" before 1986: unspecified contestant, 13½
1979: unspecified contestant, 17½
1978: Walter Paul (described as being from Coney Island, Brooklyn), 17
1968: Walter Paul (described as "a rotund Coney Island carnival caretaker"), 17
1959: Peter Washburn (described as "a one-armed Brooklyn Carnival worker"), 18½ or 17
1959: Paul Washburn (described as a carnival worker from Brooklyn), 17½
1959: Walter Paul (described as a 260-pound man from Brooklyn), 17
1957: Paul Washburn, 17½
In 2003, ESPN aired the contest for the first time on a tape - delayed basis. Starting in 2004, ESPN began airing the contest live. Since 2005, Paul Page has been ESPN 's play - by - play announcer for the event, accompanied by color commentator Richard Shea. In 2011, the women 's competition was carried live on ESPN3, followed by the men 's competition on ESPN. In 2012, ESPN signed an extension to carry the event through 2017. In 2014, ESPN signed an agreement to carry the competition on its networks for 10 years until 2024.
The Nathan 's contest has been featured in these documentaries and TV programs:
News sources typically use puns in headlines and copy referring to the contest, such as "'Tsunami' is eating contest's top dog again," "couldn't cut the mustard" (A.P.), "Nathan's King ready, with relish" (Daily News) and "To be frank, Fridge faces a real hot-dog consumer" (ESPN).
Reporter Gersh Kuntzman of the New York Post has been covering the event since the early 1990s and has been a judge at the competition since 2000. Darren Rovell, of ESPN, has competed in a qualifier.
Each contestant has his or her own eating method. Takeru Kobayashi pioneered the "Solomon Method '' at his first competition in 2001. The Solomon method consists of breaking each hot dog in half, eating the two halves at once, and then eating the bun.
"Dunking" is the most prominent method used today. Because buns absorb water, many contestants dunk the buns in water and squeeze them, making them easier to swallow and helping them slide down the throat more efficiently.
Other methods used include the "Carlene Pop, '' where the competitor jumps up and down while eating, to force the food down to the stomach. "Buns & Roses '' is a similar trick, but the eater sways from side to side instead. "Juliet - ing '' is a cheating method in which players simply throw the hot dog buns over their shoulders.
Contestants train and prepare for the event in different ways. Some fast, others prefer liquid - only diets before the event. Takeru Kobayashi meditates, drinks water and eats cabbage, then fasts before the event. Several contestants, such as Ed "Cookie '' Jarvis, aim to be "hungry, but not too hungry '' and have a light breakfast the morning of the event.
|
how many keys on a concert grand piano | Piano - wikipedia
The piano is an acoustic, stringed musical instrument invented in Italy by Bartolomeo Cristofori around the year 1700 (the exact year is uncertain), in which the strings are struck by hammers. It is played using a keyboard, which is a row of keys (small levers) that the performer presses down or strikes with the fingers and thumbs of both hands to cause the hammers to strike the strings. The word piano is a shortened form of pianoforte, the Italian term for the early 1700s versions of the instrument, which in turn derives from gravicembalo col piano e forte and fortepiano. The Italian musical terms piano and forte indicate "soft '' and "loud '' respectively, in this context referring to the variations in volume (i.e., loudness) produced in response to a pianist 's touch or pressure on the keys: the greater the velocity of a key press, the greater the force of the hammer hitting the strings, and the louder the sound of the note produced and the stronger the attack. The first fortepianos in the 1700s had a quieter sound and smaller dynamic range.
An acoustic piano usually has a protective wooden case surrounding the soundboard and metal strings, which are strung under great tension on a heavy metal frame. Pressing one or more keys on the piano 's keyboard causes a padded hammer (typically padded with firm felt) to strike the strings. The hammer rebounds from the strings, and the strings continue to vibrate at their resonant frequency. These vibrations are transmitted through a bridge to a soundboard that amplifies by more efficiently coupling the acoustic energy to the air. When the key is released, a damper stops the strings ' vibration, ending the sound. Notes can be sustained, even when the keys are released by the fingers and thumbs, by the use of pedals at the base of the instrument. The sustain pedal enables pianists to play musical passages that would otherwise be impossible, such as sounding a 10 - note chord in the lower register and then, while this chord is being continued with the sustain pedal, shifting both hands to the treble range to play a melody and arpeggios over the top of this sustained chord. Unlike the pipe organ and harpsichord, two major keyboard instruments widely used before the piano, the piano allows gradations of volume and tone according to how forcefully a performer presses or strikes the keys.
Most modern pianos have a row of 88 black and white keys: 52 white keys for the notes of the C major scale (C, D, E, F, G, A and B) and 36 shorter black keys, which are raised above the white keys and set further back on the keyboard. This means that the piano can play 88 different pitches (or "notes"), going from the deepest bass range to the highest treble. The black keys are for the "accidentals" (F♯/G♭, G♯/A♭, A♯/B♭, C♯/D♭, and D♯/E♭), which are needed to play in all twelve keys. More rarely, some pianos have additional keys (which require additional strings). Most notes are sounded by three strings, except in the bass range, where the number graduates from one string per note to two. The strings are sounded when keys are pressed or struck, and silenced by dampers when the hands are lifted from the keyboard. Although an acoustic piano has strings, it is usually classified as a percussion instrument rather than as a stringed instrument, because the strings are struck rather than plucked (as with a harpsichord or spinet); in the Hornbostel -- Sachs system of instrument classification, pianos are considered chordophones. There are two main types of piano: the grand piano and the upright piano. The grand piano is used for Classical solos, chamber music, and art song, and it is often used in jazz and pop concerts. The upright piano, which is more compact, is the most popular type, as it is a better size for use in private homes for domestic music-making and practice.
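As a small worked example of the 88-key range described above: numbering the keys 1 to 88 from the bass end, key 49 is A4 (440 Hz at common concert pitch), and each adjacent key differs by one equal-tempered semitone, a frequency ratio of 2^(1/12). A minimal sketch (the key numbering convention and function name are illustrative, not from the article):

```python
WHITE_KEYS = 52   # notes of the C major scale across the keyboard
BLACK_KEYS = 36   # the raised "accidental" keys
assert WHITE_KEYS + BLACK_KEYS == 88

def key_frequency(n, a4_hz=440.0):
    """Frequency of piano key n (1..88) in equal temperament,
    taking key 49 as A4. Each step is one semitone: a factor of 2**(1/12)."""
    return a4_hz * 2 ** ((n - 49) / 12)

# Key 1 (A0) sounds 27.5 Hz; key 88 (C8) sounds about 4186 Hz.
lowest, highest = key_frequency(1), key_frequency(88)
```

This makes concrete the claim that the 88 pitches run "from the deepest bass range to the highest treble": the full keyboard spans a bit over seven octaves.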
During the 1800s, influenced by the musical trends of the Romantic music era, innovations such as the cast iron frame (which allowed much greater string tensions) and aliquot stringing gave grand pianos a more powerful sound, with a longer sustain and richer tone. In the nineteenth century, a family 's piano played the same role that a radio or phonograph played in the twentieth century; when a nineteenth - century family wanted to hear a newly published musical piece or symphony, they could hear it by having a family member play it on the piano. During the nineteenth century, music publishers produced many musical works in arrangements for piano, so that music lovers could play and hear the popular pieces of the day in their home. The piano is widely employed in classical, jazz, traditional and popular music for solo and ensemble performances, accompaniment, and for composing, songwriting and rehearsals. Although the piano is very heavy and thus not portable and is expensive (in comparison with other widely used accompaniment instruments, such as the acoustic guitar), its musical versatility (i.e., its wide pitch range, ability to play chords with up to 10 notes, louder or softer notes and two or more independent musical lines at the same time), the large number of musicians and amateurs trained in playing it, and its wide availability in performance venues, schools and rehearsal spaces have made it one of the Western world 's most familiar musical instruments. With technological advances, amplified electric pianos (1929), electronic pianos (1970s), and digital pianos (1980s) have also been developed. The electric piano became a popular instrument in the 1960s and 1970s genres of jazz fusion, funk music and rock music.
The piano was founded on earlier technological innovations in keyboard instruments. Pipe organs have been used since Antiquity, and as such, the development of pipe organs enabled instrument builders to learn about creating keyboard mechanisms for sounding pitches. The first string instruments with struck strings were the hammered dulcimers, which were used since the Middle Ages in Europe. During the Middle Ages, there were several attempts at creating stringed keyboard instruments with struck strings. By the 17th century, the mechanisms of keyboard instruments such as the clavichord and the harpsichord were well developed. In a clavichord, the strings are struck by tangents, while in a harpsichord, they are mechanically plucked by quills when the performer depresses the key. Centuries of work on the mechanism of the harpsichord in particular had shown instrument builders the most effective ways to construct the case, soundboard, bridge, and mechanical action for a keyboard intended to sound strings.
The invention of the piano is credited to Bartolomeo Cristofori (1655 -- 1731) of Padua, Italy, who was employed by Ferdinando de ' Medici, Grand Prince of Tuscany, as the Keeper of the Instruments. Cristofori was an expert harpsichord maker, and was well acquainted with the body of knowledge on stringed keyboard instruments. He used his knowledge of harpsichord keyboard mechanisms and actions to help him to develop the first pianos. It is not known exactly when Cristofori first built a piano. An inventory made by his employers, the Medici family, indicates the existence of a piano by the year 1700; another document of doubtful authenticity indicates a date of 1698. The three Cristofori pianos that survive today date from the 1720s. Cristofori named the instrument un cimbalo di cipresso di piano e forte ("a keyboard of cypress with soft and loud ''), abbreviated over time as pianoforte, fortepiano, and later, simply, piano.
While the clavichord allowed expressive control of volume and sustain, it was too quiet for large performances in big halls. The harpsichord produced a sufficiently loud sound, especially when a coupler was used to sound both manuals of a two - manual harpsichord, but it offered no dynamic or accent - based expressive control over each note. A harpsichord could not produce a variety of dynamic levels from the same keyboard during a musical passage (although a harpsichord with two manuals could be used to alternate between two different stops (settings on the harpsichord which determined which set of strings are sounded), which could include a louder stop and a quieter stop). The piano offered the best features of both instruments, combining the ability to play loudly and perform sharp accents, which enabled the piano to project more during piano concertos and play in larger venues, with dynamic control that permitted a range of dynamics, including soft, quiet playing.
Cristofori 's great success was solving, with no known prior example, the fundamental mechanical problem of designing a stringed keyboard instrument in which the notes are struck by a hammer. The hammer must strike the string, but not remain in contact with it, because this would damp the sound and stop the string from vibrating and making sound. This means that after striking the string, the hammer must be lifted or raised off the strings. Moreover, the hammer must return to its rest position without bouncing violently, and it must return to a position in which it is ready to play almost immediately after its key is depressed so the player can repeat the same note rapidly. Cristofori 's piano action was a model for the many approaches to piano actions that followed in the next century. Cristofori 's early instruments were made with thin strings, and were much quieter than the modern piano, but they were much louder and with more sustain in comparison to the clavichord -- the only previous keyboard instrument capable of dynamic nuance via the weight or force with which the keyboard is played.
Cristofori 's new instrument remained relatively unknown until an Italian writer, Scipione Maffei, wrote an enthusiastic article about it in 1711, including a diagram of the mechanism, that was translated into German and widely distributed. Most of the next generation of piano builders started their work based on reading the article. One of these builders was Gottfried Silbermann, better known as an organ builder. Silbermann 's pianos were virtually direct copies of Cristofori 's, with one important addition: Silbermann invented the forerunner of the modern sustain pedal, which lifts all the dampers from the strings simultaneously. This allows the pianist to sustain the notes that they have depressed even after their fingers are no longer pressing down the keys. This innovation enabled pianists to, for example, play a loud chord with both hands in the lower register of the instrument, sustain the chord with the sustain pedal, and then, with the chord continuing to sound, relocate their hands to a different register of the keyboard in preparation for a subsequent section.
Silbermann showed Johann Sebastian Bach one of his early instruments in the 1730s, but Bach did not like the instrument at that time, claiming that the higher notes were too soft to allow a full dynamic range. Although this earned him some animosity from Silbermann, the criticism was apparently heeded. Bach did approve of a later instrument he saw in 1747, and even served as an agent in selling Silbermann 's pianos. "Instrument: piano et forte genandt '' -- a reference to the instrument 's ability to play soft and loud -- was an expression that Bach used to help sell the instrument when he was acting as Silbermann 's agent in 1749.
Piano - making flourished during the late 18th century in the Viennese school, which included Johann Andreas Stein (who worked in Augsburg, Germany) and the Viennese makers Nannette Streicher (daughter of Stein) and Anton Walter. Viennese - style pianos were built with wood frames, two strings per note, and leather - covered hammers. Some of these Viennese pianos had the opposite coloring of modern - day pianos; the natural keys were black and the accidental keys white. It was for such instruments that Wolfgang Amadeus Mozart composed his concertos and sonatas, and replicas of them are built in the 21st century for use in authentic - instrument performance of his music. The pianos of Mozart 's day had a softer, more ethereal tone than 21st century pianos or English pianos, with less sustaining power. The term fortepiano has in modern times come to be used to distinguish these early instruments (and modern re-creations of them) from later pianos.
In the period from about 1790 to 1860, the Mozart - era piano underwent tremendous changes that led to the modern form of the instrument. This revolution was in response to a preference by composers and pianists for a more powerful, sustained piano sound, and made possible by the ongoing Industrial Revolution with resources such as high - quality piano wire for strings, and precision casting for the production of massive iron frames that could withstand the tremendous tension of the strings. Over time, the tonal range of the piano was also increased from the five octaves of Mozart 's day to the seven octave (or more) range found on modern pianos.
Early technological progress in the late 1700s owed much to the firm of Broadwood. John Broadwood joined with another Scot, Robert Stodart, and a Dutchman, Americus Backers, to design a piano in the harpsichord case -- the origin of the "grand ''. They achieved this in about 1777. They quickly gained a reputation for the splendour and powerful tone of their instruments, with Broadwood constructing pianos that were progressively larger, louder, and more robustly constructed. They sent pianos to both Joseph Haydn and Ludwig van Beethoven, and were the first firm to build pianos with a range of more than five octaves: five octaves and a fifth during the 1790s, six octaves by 1810 (Beethoven used the extra notes in his later works), and seven octaves by 1820. The Viennese makers similarly followed these trends; however the two schools used different piano actions: Broadwoods used a more robust action, whereas Viennese instruments were more sensitive.
By the 1820s, the center of piano innovation had shifted to Paris, where the Pleyel firm manufactured pianos used by Frédéric Chopin and the Érard firm manufactured those used by Franz Liszt. In 1821, Sébastien Érard invented the double escapement action, which incorporated a repetition lever (also called the balancier) that permitted repeating a note even if the key had not yet risen to its maximum vertical position. This facilitated rapid playing of repeated notes, a musical device exploited by Liszt. When the invention became public, as revised by Henri Herz, the double escapement action gradually became standard in grand pianos, and is still incorporated into all grand pianos currently produced. Other improvements of the mechanism included the use of firm felt hammer coverings instead of layered leather or cotton. Felt, which was first introduced by Jean - Henri Pape in 1826, was a more consistent material, permitting wider dynamic ranges as hammer weights and string tension increased. The sostenuto pedal (see below), invented in 1844 by Jean - Louis Boisselot and copied by the Steinway firm in 1874, allowed a wider range of effects, such as playing a 10 - note chord in the bass range, sustaining it with the pedal, and then moving both hands over to the treble range to play a two - hand melody or sequence of arpeggios.
One innovation that helped create the powerful sound of the modern piano was the use of a massive, strong, cast iron frame. Also called the "plate '', the iron frame sits atop the soundboard, and serves as the primary bulwark against the force of string tension that can exceed 20 tons (180 kilonewtons) in a modern grand. The single piece cast iron frame was patented in 1825 in Boston by Alpheus Babcock, combining the metal hitch pin plate (1821, claimed by Broadwood on behalf of Samuel Hervé) and resisting bars (Thom and Allen, 1820, but also claimed by Broadwood and Érard). Babcock later worked for the Chickering & Mackays firm who patented the first full iron frame for grand pianos in 1843. Composite forged metal frames were preferred by many European makers until the American system was fully adopted by the early 20th century. The increased structural integrity of the iron frame allowed the use of thicker, tenser, and more numerous strings. In 1834, the Webster & Horsfal firm of Birmingham brought out a form of piano wire made from cast steel; according to Dolge it was "so superior to the iron wire that the English firm soon had a monopoly. '' But a better steel wire was soon created in 1840 by the Viennese firm of Martin Miller, and a period of innovation and intense competition ensued, with rival brands of piano wire being tested against one another at international competitions, leading ultimately to the modern form of piano wire.
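The aggregate tension figure quoted above can be sanity - checked with rough per - string numbers. All values below are typical round figures assumed for illustration, not measurements of any particular instrument:

```python
# Rough sanity check of the total string tension in a modern grand.
# Both numbers are assumed, typical values, not from any one piano.
strings = 230            # a modern grand has roughly 220-240 strings
tension_per_string = 800 # newtons; typical per-string tensions run ~700-900 N

total_newtons = strings * tension_per_string
total_kn = total_newtons / 1000        # kilonewtons
total_tons = total_newtons / 9806.65   # metric tons-force

# Lands in the same range as the article's "20 tons (180 kilonewtons)"
print(round(total_kn), "kN ~", round(total_tons, 1), "tons")
```

With these assumed figures the total comes out near 184 kN, the same order as the 180 kN cited in the text.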
Other important advances included changes to the way the piano is strung, such as the use of a "choir '' of three strings rather than two for all but the lowest notes, and the implementation of an over-strung scale, in which the strings are placed in two separate planes, each with its own bridge height. (This is also called cross-stringing. Whereas earlier instruments ' bass strings were a mere continuation of a single string plane, over-stringing placed the bass bridge behind and to the treble side of the tenor bridge area. This crossed the strings, with the bass strings in the higher plane.) This permitted a much narrower cabinet at the "nose '' end of the piano, and optimized the transition from unwound tenor strings to the iron or copper - wrapped bass strings. Over-stringing was invented by Pape during the 1820s, and first patented for use in grand pianos in the United States by Henry Steinway, Jr. in 1859.
Some piano makers developed schemes to enhance the tone of each note. Julius Blüthner developed Aliquot stringing in 1893, following earlier systems by Pascal Taskin (1788) and Collard & Collard (1821). These systems were used to strengthen the tone of the highest register of notes on the piano, which until then were viewed as too weak - sounding. Each used more distinctly ringing, undamped vibrations of sympathetically vibrating strings to add to the tone, except the Blüthner Aliquot stringing, which uses an additional fourth string in the upper two treble sections. While the hitchpins of these separately suspended Aliquot strings are raised slightly above the level of the usual tri-choir strings, they are not struck by the hammers but rather are damped by attachments of the usual dampers. Eager to copy these effects, Theodore Steinway invented duplex scaling, which used short lengths of non-speaking wire bridged by the "aliquot '' throughout much of the upper range of the piano, always in locations that caused them to vibrate sympathetically in conformity with their respective overtones -- typically in doubled octaves and twelfths. The mechanical action structure of the upright piano was invented in London, England in 1826 by Robert Wornum, and upright models became the most popular model. Upright pianos took less space than a grand piano, and as such they were a better size for use in private homes for domestic music - making and practice.
Some early pianos had shapes and designs that are no longer in use. The square piano (not truly square, but rectangular) was cross strung at an extremely acute angle above the hammers, with the keyboard set along the long side. This design is attributed to Gottfried Silbermann or Christian Ernst Friderici on the continent, and Johannes Zumpe or Harman Vietor in England, and it was improved by changes first introduced by Guillaume - Lebrecht Petzold in France and Alpheus Babcock in the United States. Square pianos were built in great numbers through the 1840s in Europe and the 1890s in the United States, and saw the most visible change of any type of piano: the iron - framed, over-strung squares manufactured by Steinway & Sons were more than two - and - a-half times the size of Zumpe 's wood - framed instruments from a century before. Their overwhelming popularity was due to inexpensive construction and price, although their tone and performance were limited by narrow soundboards, simple actions and string spacing that made proper hammer alignment difficult.
The tall, vertically strung upright grand was arranged like a grand set on end, with the soundboard and bridges above the keys, and tuning pins below them. The term was later revived by many manufacturers for advertising purposes. "Giraffe pianos '', "pyramid pianos '' and "lyre pianos '' were arranged in a somewhat similar fashion, using evocatively shaped cases. The very tall cabinet piano was introduced about 1805 and was built through the 1840s. It had strings arranged vertically on a continuous frame with bridges extending nearly to the floor behind the keyboard, and a very large sticker action. The short cottage upright or pianino with vertical stringing, made popular by Robert Wornum around 1815, was built into the 20th century. They are informally called birdcage pianos because of their prominent damper mechanism. The oblique upright, popularized in France by Roller & Blanchet during the late 1820s, was diagonally strung throughout its compass. The tiny spinet upright was manufactured from the mid-1930s until recent times. The low position of the hammers required the use of a "drop action '' to preserve a reasonable keyboard height. Modern upright and grand pianos attained their present, 2000 - era forms by the end of the 19th century. While improvements have been made in manufacturing processes, and many individual details of the instrument continue to receive attention, and a small number of acoustic pianos are produced with MIDI recording and sound module - triggering capabilities, the 19th century was the era of the most dramatic innovations and modifications of the instrument.
Modern acoustic pianos have two basic configurations, the grand piano and the upright piano, with various styles of each. There are also specialized and novelty pianos, electric pianos based on electromechanical designs, electronic pianos that synthesize piano - like tones using oscillators, and digital pianos using digital samples of acoustic piano sounds.
In grand pianos, the frame and strings are horizontal, with the strings extending away from the keyboard. The action lies beneath the strings, and uses gravity as its means of return to a state of rest. There are many sizes of grand piano. A rough generalization distinguishes the concert grand (between 2.2 and 3 meters (7 ft 3 in -- 9 ft 10 in)) from the parlor grand or boudoir grand (1.7 to 2.2 meters (5 ft 7 in -- 7 ft 3 in)) and the smaller baby grand (around 1.5 meters (4 ft 11 in)).
All else being equal, longer pianos with longer strings have larger, richer sound and lower inharmonicity of the strings. Inharmonicity is the degree to which the frequencies of overtones (known as partials or harmonics) sound sharp relative to whole multiples of the fundamental frequency. This results from the piano 's considerable string stiffness; as a struck string decays its harmonics vibrate, not from their termination, but from a point very slightly toward the center (or more flexible part) of the string. The higher the partial, the further sharp it runs. Pianos with shorter and thicker strings (i.e., small pianos with short string scales) have more inharmonicity. The greater the inharmonicity, the more the ear perceives it as harshness of tone.
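The sharpening of higher partials can be sketched with the standard stiff - string model (an assumption here, not stated in the article): the n-th partial of a string with inharmonicity coefficient B falls at f_n = n · f1 · sqrt(1 + B·n²). The B values below are illustrative guesses for a long versus a short string:

```python
import math

def partial_freq(n, f1, B):
    """Stiff-string model (assumed): f_n = n * f1 * sqrt(1 + B * n^2).

    B is the inharmonicity coefficient set by string stiffness;
    B = 0 recovers the ideal harmonic series.
    """
    return n * f1 * math.sqrt(1 + B * n * n)

def sharpness_cents(n, B):
    """How far the n-th partial runs sharp of the ideal harmonic n * f1."""
    return 1200 * math.log2(partial_freq(n, 1.0, B) / n)

# Illustrative coefficients (assumed): a long grand string might have
# B ~ 0.0001, a short spinet string B ~ 0.001. Higher partials run
# progressively sharper, and the short string sharpens much faster.
for B in (0.0001, 0.001):
    print([round(sharpness_cents(n, B), 1) for n in (2, 4, 8)])
```

The printout shows both effects described in the text: within one string the sharpness grows with the partial number, and the stiffer (larger B) string is sharper at every partial.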
The inharmonicity of piano strings requires that octaves be stretched, or tuned to a lower octave 's corresponding sharp overtone rather than to a theoretically correct octave. If octaves are not stretched, single octaves sound in tune, but double -- and notably triple -- octaves are unacceptably narrow. Stretching a small piano 's octaves to match its inherent inharmonicity level creates an imbalance among all the instrument 's intervallic relationships, not just its octaves. In a concert grand, however, the octave "stretch '' retains harmonic balance, even when aligning treble notes to a harmonic produced from three octaves below. This lets close and widespread octaves sound pure, and produces virtually beatless perfect fifths. This gives the concert grand a brilliant, singing and sustaining tone quality -- one of the principal reasons that full - size grands are used in the concert hall during piano concertos with orchestra. Smaller grands satisfy the space and cost needs of domestic use; as well, they are used in some small teaching studios and smaller performance venues.
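Under the same stiff - string model (assumed, with illustrative coefficients), stretching an octave means tuning the upper note to the lower note 's slightly sharp second partial instead of to exactly twice its fundamental:

```python
import math

def partial_freq(n, f1, B):
    # Stiff-string model (assumed): f_n = n * f1 * sqrt(1 + B * n^2)
    return n * f1 * math.sqrt(1 + B * n * n)

def octave_stretch_cents(f_lower, B):
    """Cents by which a stretched octave exceeds a pure 2:1 octave.

    The tuner matches the upper note's fundamental to the lower
    note's (sharp) 2nd partial rather than to 2 * f_lower.
    """
    target = partial_freq(2, f_lower, B)  # where the upper note is placed
    pure = 2 * f_lower                    # theoretically correct octave
    return 1200 * math.log2(target / pure)

# Assumed coefficients: a small piano (larger B) needs more stretch
# than a concert grand (smaller B), as the article describes.
print(round(octave_stretch_cents(261.6, 0.0005), 2))   # small piano
print(round(octave_stretch_cents(261.6, 0.00005), 2))  # concert grand
```

With B = 0 the stretch collapses to zero, matching the statement that stretch exists only because the overtones run sharp.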
Upright pianos, also called vertical pianos, are more compact because the frame and strings are vertical. Upright pianos are generally less expensive than grand pianos. Upright pianos are widely used in churches, community centers, schools, music conservatories and university music programs as rehearsal and practice instruments, and they are popular models for in - home purchase. The hammers move horizontally, and return to their resting position via springs, which are susceptible to degradation. Upright pianos with unusually tall frames and long strings are sometimes called upright grand pianos. Some authors classify modern pianos according to their height and to modifications of the action that are necessary to accommodate the height.
The toy piano, introduced in the 19th century, is a small piano - like instrument that generally uses round metal rods to produce sound, rather than strings. The US Library of Congress recognizes the toy piano as a unique instrument with the subject designation, Toy Piano Scores: M175 T69. In 1863, Henri Fourneaux invented the player piano, which plays itself from a piano roll. A machine perforates a performance recording into rolls of paper, and the player piano replays the performance using pneumatic devices. Modern equivalents of the player piano include the Bösendorfer CEUS, Yamaha Disklavier and QRS Pianomation, using solenoids and MIDI rather than pneumatics and rolls. A silent piano is an acoustic piano having an option to silence the strings by means of an interposing hammer bar. They are designed for private silent practice, to avoid disturbing others. Edward Ryley invented the transposing piano in 1801. This rare instrument has a lever under the keyboard used to move the keyboard relative to the strings, so a pianist can play in a familiar key while the music sounds in a different key.
The minipiano is an instrument patented by the Brasted brothers of the Eavestaff Ltd. piano company in 1934. This instrument has a braceless back, and a soundboard positioned below the keys -- meaning that long metal rods pulled on the levers to make the hammers strike the strings. The first model, known as the Pianette, was unique in that the tuning pins extended through the instrument, so it could be tuned at the front.
The prepared piano, present in some contemporary art music of the 20th and 21st centuries, is a piano with objects placed inside it to alter its sound, or one that has had its mechanism changed in some other way. The scores for music for prepared piano specify the modifications, for example instructing the pianist to insert pieces of rubber, paper, metal screws, or washers in between the strings. These either mute the strings or alter their timbre. A harpsichord - like sound can be produced by placing or dangling small metal buttons in front of the hammer. Adding an eraser between the bass strings produces a mellow, thumpy sound reminiscent of the plucked double bass. Inserting metal screws or washers can cause the piano to make a jangly sound as these metal items vibrate against the strings. In 1954 a German company exhibited a wire-less piano at the Spring Fair in Frankfurt, Germany that sold for US $ 238. The wires were replaced by metal bars of different alloys that replicated the standard wires when played. A similar concept is used in the electric - acoustic Rhodes piano.
The first electric pianos from the late 1920s used metal strings with a magnetic pickup, an amplifier and a loudspeaker. The electric pianos that became most popular in pop and rock music in the 1960s and 1970s, such as the Fender Rhodes, use metal tines in place of strings and use electromagnetic pickups similar to those on an electric guitar. The resulting electrical, analogue signal can then be amplified with a keyboard amplifier or electronically manipulated with effects units. Electric pianos are rarely used in classical music, where their main use is as inexpensive rehearsal or practice instruments in music schools. However, electric pianos, particularly the Fender Rhodes, became important instruments in 1970s funk and jazz fusion and in some rock music genres.
Electronic pianos are non-acoustic; they do not have strings, tines or hammers, but are a type of synthesizer that simulates or imitates piano sounds using oscillators and filters that synthesize the sound of an acoustic piano. They need to be connected to a keyboard amplifier and speaker to produce sound (however, some electronic keyboards have a built - in amp and speaker). Alternatively, a person can practice an electronic piano with headphones to avoid disturbing others.
Digital pianos are also non-acoustic and do not have strings or hammers. They use digital sampling technology to accurately reproduce the acoustic sound of each piano note. They also need to be connected to a power amplifier and speaker to produce sound (however, most digital pianos have a built - in amp and speaker). Alternatively, a person can practice with headphones to avoid disturbing others. Digital pianos can include sustain pedals, weighted or semi-weighted keys, multiple voice options (e.g., sampled or synthesized imitations of electric piano, Hammond organ, violin, etc.), and MIDI interfaces. MIDI inputs and outputs allow a digital piano to be connected to other electronic instruments or musical devices. For example, a digital piano 's MIDI out signal could be connected by a patch cord to a synth module, which would allow the performer to use the keyboard of the digital piano to play modern synthesizer sounds. Early digital pianos tended to lack a full set of pedals but the synthesis software of later models such as the Yamaha Clavinova series synthesised the sympathetic vibration of the other strings (such as when the sustain pedal is depressed) and full pedal sets can now be replicated. The processing power of digital pianos has enabled highly realistic pianos using multi-gigabyte piano sample sets with as many as ninety recordings, each lasting many seconds, for each key under different conditions (e.g., there are samples of each note being struck softly, loudly, with a sharp attack, etc.). Additional samples emulate sympathetic resonance of the strings when the sustain pedal is depressed, key release, the drop of the dampers, and simulations of techniques such as re-pedalling.
Digital, MIDI - equipped, pianos can output a stream of MIDI data, or record and play via a CD ROM or USB flash drive using MIDI format files, similar in concept to a pianola. The MIDI file records the physics of a note rather than its resulting sound and recreates the sounds from its physical properties (e.g., which note was struck and with what velocity). Computer based software, such as Modartt 's 2006 Pianoteq, can be used to manipulate the MIDI stream in real time or subsequently to edit it. This type of software may use no samples but synthesize a sound based on aspects of the physics that went into the creation of a played note.
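As a concrete illustration of recording "which note was struck and with what velocity '': in the MIDI protocol, a key press travels as a three - byte Note - On channel message. A minimal sketch (the channel and velocity values are arbitrary examples):

```python
def note_on(note, velocity, channel=0):
    """Encode a MIDI Note-On message as three bytes.

    Status byte 0x90 | channel, then the note number (0-127,
    middle C = 60) and the velocity (how hard the key was struck).
    """
    assert 0 <= note <= 127 and 0 <= velocity <= 127 and 0 <= channel <= 15
    return bytes([0x90 | channel, note, velocity])

def decode(msg):
    """Decode a Note-On back into (channel, note, velocity)."""
    status, note, velocity = msg
    assert status & 0xF0 == 0x90, "not a Note-On message"
    return status & 0x0F, note, velocity

# Middle C struck fairly hard on channel 0:
msg = note_on(60, 100)
print(msg.hex())    # -> "903c64"
print(decode(msg))  # -> (0, 60, 100)
```

A Standard MIDI File wraps streams of such events with delta - times, which is why a performance can be edited or re - synthesized note by note rather than as recorded audio.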
In the 2000s, some pianos include an acoustic grand piano or upright piano combined with MIDI electronic features. Such a piano can be played acoustically, or the keyboard can be used as a MIDI controller, which can trigger a synthesizer module or music sampler. Some electronic feature - equipped pianos such as the Yamaha Disklavier electronic player piano, introduced in 1987, are outfitted with electronic sensors for recording and electromechanical solenoids for player piano - style playback. Sensors record the movements of the keys, hammers, and pedals during a performance, and the system saves the performance data as a Standard MIDI File (SMF). On playback, the solenoids move the keys and pedals and thus reproduce the original performance. Modern Disklaviers typically include an array of electronic features, such as a built - in tone generator for playing back MIDI accompaniment tracks, speakers, MIDI connectivity that supports communication with computing devices and external MIDI instruments, additional ports for audio and SMPTE I / O, and Internet connectivity. Disklaviers have been manufactured in the form of upright, baby grand, and grand piano styles (including a nine - foot concert grand). Reproducing systems have ranged from relatively simple, playback - only models to the PRO models which record performance data at resolutions that exceed the limits of normal MIDI data. The unit mounted under the keyboard of the piano can play MIDI or audio software on its CD or floppy disk drive.
Pianos can have upwards of 12,000 individual parts, supporting six functional features: keyboard, hammers, dampers, bridge, soundboard, and strings. Many parts of a piano are made of materials selected for strength and longevity. This is especially true of the outer rim. It is most commonly made of hardwood, typically hard maple or beech, and its massiveness serves as an essentially immobile object from which the flexible soundboard can best vibrate. According to Harold A. Conklin, the purpose of a sturdy rim is so that, "... the vibrational energy will stay as much as possible in the soundboard instead of dissipating uselessly in the case parts, which are inefficient radiators of sound. ''
Hardwood rims are commonly made by laminating thin, hence flexible, strips of hardwood, bending them to the desired shape immediately after the application of glue. The bent plywood system was developed by C.F. Theodore Steinway in 1880 to reduce manufacturing time and costs. Previously, the rim was constructed from several pieces of solid wood, joined and veneered, and this method continued to be used in Europe well into the 20th century. A modern exception, Bösendorfer, the Austrian manufacturer of high - quality pianos, constructs their inner rims from solid spruce, the same wood that the soundboard is made from, which is notched to allow it to bend; rather than isolating the rim from vibration, their "resonance case principle '' allows the framework to more freely resonate with the soundboard, creating additional coloration and complexity of the overall sound.
The thick wooden posts on the underside (grands) or back (uprights) of the piano stabilize the rim structure, and are made of softwood for stability. The requirement of structural strength, fulfilled by stout hardwood and thick metal, makes a piano heavy. Even a small upright can weigh 136 kg (300 lb), and the Steinway concert grand (Model D) weighs 480 kg (1,060 lb). The largest piano available on the general market, the Fazioli F308, weighs 570 kg (1,260 lb).
The pinblock, which holds the tuning pins in place, is another area where toughness is important. It is made of hardwood (typically hard maple or beech), and is laminated for strength, stability and longevity. Piano strings (also called piano wire), which must endure years of extreme tension and hard blows, are made of high carbon steel. They are manufactured to vary as little as possible in diameter, since all deviations from uniformity introduce tonal distortion. The bass strings of a piano are made of a steel core wrapped with copper wire, to increase their mass whilst retaining flexibility. If all strings throughout the piano 's compass were individual (monochord), the massive bass strings would overpower the upper ranges. Makers compensate for this with the use of double (bichord) strings in the tenor and triple (trichord) strings throughout the treble.
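The trade - off behind copper - wrapped bass strings can be sketched with the ideal - string relation f = (1 / 2L) · sqrt(T / μ): at a practical string length and tension, a low fundamental demands a large linear density μ, which the copper winding supplies without stiffening the steel core. All numbers below are assumed, round - figure values:

```python
import math

def fundamental(length_m, tension_n, mu_kg_per_m):
    """Ideal-string fundamental: f = (1 / (2L)) * sqrt(T / mu)."""
    return math.sqrt(tension_n / mu_kg_per_m) / (2 * length_m)

def required_mu(length_m, tension_n, f_hz):
    """Linear density needed to reach a target fundamental."""
    return tension_n / (2 * length_m * f_hz) ** 2

# Assumed values: the lowest A (27.5 Hz) on a 1.9 m speaking length
# at a typical 800 N tension. The winding must bring the string to
# roughly this mass per unit length.
mu_needed = required_mu(1.9, 800, 27.5)
print(round(mu_needed * 1000, 1), "g/m")

# For comparison, a short plain-steel treble string (assumed 0.4 m,
# 6 g/m) at the same tension sounds far higher.
print(round(fundamental(0.4, 800, 0.006)), "Hz")
```

The same relation shows why monochord bass strings would otherwise overpower the treble: matching loudness across the compass is largely a matter of balancing string mass and count, hence the bichord and trichord groupings described above.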
The plate (harp), or metal frame, of a piano is usually made of cast iron. A massive plate is advantageous: since the strings are anchored to the plate at both ends, an insufficiently massive plate would absorb too much of the vibrational energy that should go through the bridge to the soundboard. While some manufacturers use cast steel in their plates, most prefer cast iron, which is easy to cast and machine, has flexibility sufficient for piano use, is much more resistant to deformation than steel, and is especially tolerant of compression. Plate casting is an art, since dimensions are crucial and the iron shrinks about one percent during cooling. Including an extremely large piece of metal in a piano is potentially an aesthetic handicap, which piano makers overcome by polishing, painting, and decorating the plate; plates often include the manufacturer's ornamental medallion. In an effort to make pianos lighter, Alcoa worked with Winter and Company piano manufacturers to make pianos using an aluminum plate during the 1940s. Aluminum piano plates were not widely accepted, and were discontinued.
The numerous parts of a piano action are generally made from hardwood such as maple, beech, and hornbeam; however, since World War II, makers have also incorporated plastics. Early plastics used in some pianos in the late 1940s and 1950s proved disastrous when they lost strength after a few decades of use. Beginning in 1961, the New York branch of the Steinway firm incorporated Teflon, a synthetic material developed by DuPont, for some parts of its Permafree grand action in place of cloth bushings, but abandoned the experiment in 1982 due to excessive friction and a "clicking" that developed over time; Teflon is "humidity stable" whereas the wood adjacent to the Teflon swells and shrinks with humidity changes, causing problems. More recently, the Kawai firm has built pianos with action parts made of more modern materials such as carbon fiber reinforced plastic, and the piano parts manufacturer Wessell, Nickel and Gross has launched a new line of carefully engineered composite parts. Thus far these parts have performed reasonably well, but it will take decades to know whether they equal the longevity of wood.
In all but the lowest quality pianos the soundboard is made of solid spruce (that is, spruce boards glued together along the side grain). Spruce 's high ratio of strength to weight minimizes acoustic impedance while offering strength sufficient to withstand the downward force of the strings. The best piano makers use quarter - sawn, defect - free spruce of close annular grain, carefully seasoning it over a long period before fabricating the soundboards. This is the identical material that is used in quality acoustic guitar soundboards. Cheap pianos often have plywood soundboards.
Piano hammer design must balance several demands: the hammer felt must be soft enough not to create the loud, very high harmonics that a hard hammer would cause; the hammer must be light enough to move swiftly when a key is pressed; and yet it must be strong enough to hit the strings hard when the player strikes the keys forcefully for fortissimo playing or sforzando accents.
In the early years of piano construction, keys were commonly made from sugar pine. In the 2010s, they are usually made of spruce or basswood. Spruce is typically used in high - quality pianos. Black keys were traditionally made of ebony, and the white keys were covered with strips of ivory. However, since ivory - yielding species are now endangered and protected by treaty, or are illegal in some countries, makers use plastics almost exclusively. Also, ivory tends to chip more easily than plastic. Legal ivory can still be obtained in limited quantities. The Yamaha firm invented a plastic called Ivorite that they claim mimics the look and feel of ivory. It has since been imitated by other makers.
Almost every modern piano has 52 white keys and 36 black keys for a total of 88 keys (seven octaves plus a minor third, from A to C). Many older pianos only have 85 keys (seven octaves from A to A). Some piano manufacturers have extended the range further in one or both directions. For example, the Imperial Bösendorfer has nine extra keys at the bass end, giving a total of 97 keys and an eight octave range. These extra keys are sometimes hidden under a small hinged lid that can cover the keys to prevent visual disorientation for pianists unfamiliar with the extra keys, or the colours of the extra white keys are reversed (black instead of white). More recently, manufacturer Stuart & Sons created a piano with 102 keys, going from C to F. The extra keys are the same as the other keys in appearance.
The extra keys are added primarily for increased resonance from the associated strings; that is, they vibrate sympathetically with other strings whenever the damper pedal is depressed and thus give a fuller tone. Only a very small number of works composed for piano actually use these notes.
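The standard 88-key layout described above (52 white keys, 36 black, from A0 to C8) can be checked with a short sketch. The note-naming scheme is standard scientific pitch notation; the code itself is purely illustrative:

```python
# The standard 88-key compass runs from A0 to C8. Enumerating the
# note names over that span reproduces the 52 white / 36 black split.

NOTE_NAMES = ["A", "A#", "B", "C", "C#", "D", "D#", "E",
              "F", "F#", "G", "G#"]  # one octave, starting from A

def standard_keys():
    """Note names with octave numbers for the 88 keys A0..C8."""
    keys = []
    for i in range(88):
        name = NOTE_NAMES[i % 12]
        # In scientific pitch notation the octave number increments
        # at each C; A0 lies 3 keys below C1.
        octave = (i + 9) // 12
        keys.append(f"{name}{octave}")
    return keys

keys = standard_keys()
white = [k for k in keys if "#" not in k]
print(keys[0], keys[-1])                   # A0 C8
print(len(white), len(keys) - len(white))  # 52 36
```

The 52/36 split follows from there being 7 natural notes in each group of 12 semitones; 88 keys span 7 such groups plus the 4 keys A7, A#7, B7, C8.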
The toy piano manufacturer Schoenhut started manufacturing both grands and uprights with only 44 or 49 keys and a shorter distance between the keyboard and the pedals. These are true pianos with action and strings; Schoenhut added them to its product line in response to numerous customer requests.
There is a rare variant of piano with double keyboards, called the Emánuel Moór Pianoforte. It was invented by the Hungarian composer and pianist Emánuel Moór (19 February 1863 -- 20 October 1931). It consists of two keyboards lying one above the other. The lower keyboard has the usual 88 keys and the upper keyboard has 76 keys. Pressing a key on the upper keyboard makes the internal mechanism pull down the corresponding key on the lower keyboard, but an octave higher. This lets a pianist reach two octaves with one hand, which is impossible on a conventional piano. Because of its double keyboard, musical works originally written for double-manual harpsichord, such as Bach's Goldberg Variations, become much easier to play, since on a conventional single-keyboard piano they involve complex and hand-tangling cross-hand movements. The design also featured a special fourth pedal that coupled the lower and upper keyboards, so that playing on the lower keyboard also sounded the note an octave higher. Only about 60 Emánuel Moór Pianofortes were made, mostly manufactured by Bösendorfer. Other piano manufacturers, such as Bechstein, Chickering, and Steinway & Sons, also made a few.
Pianos have been built with alternative keyboard systems, e.g., the Jankó keyboard.
Pianos have had pedals, or some close equivalent, since the earliest days. (In the 18th century, some pianos used levers pressed upward by the player 's knee instead of pedals.) Most grand pianos in the US have three pedals: the soft pedal (una corda), sostenuto, and sustain pedal (from left to right, respectively), while in Europe, the standard is two pedals: the soft pedal and the sustain pedal. Most modern upright pianos also have three pedals: soft pedal, practice pedal and sustain pedal, though older or cheaper models may lack the practice pedal. In Europe the standard for upright pianos is two pedals: the soft and the sustain pedals.
The sustain pedal (or, damper pedal) is often simply called "the pedal '', since it is the most frequently used. It is placed as the rightmost pedal in the group. It lifts the dampers from all keys, sustaining all played notes. In addition, it alters the overall tone by allowing all strings, including those not directly played, to reverberate. When all of the other strings on the piano can vibrate, this allows sympathetic vibration of strings that are harmonically related to the sounded pitches. For example, if the pianist plays the 440 Hz "A '' note, the higher octave "A '' notes will also sound sympathetically.
The soft pedal or una corda pedal is placed leftmost in the row of pedals. In grand pianos it shifts the entire action / keyboard assembly to the right (a very few instruments have shifted left) so that the hammers hit two of the three strings for each note. In the earliest pianos whose unisons were bichords rather than trichords, the action shifted so that hammers hit a single string, hence the name una corda, or ' one string '. The effect is to soften the note as well as change the tone. In uprights this action is not possible; instead the pedal moves the hammers closer to the strings, allowing the hammers to strike with less kinetic energy. This produces a slightly softer sound, but no change in timbre.
On grand pianos, the middle pedal is a sostenuto pedal. This pedal keeps raised any damper already raised at the moment the pedal is depressed. This makes it possible to sustain selected notes (by depressing the sostenuto pedal before those notes are released) while the player 's hands are free to play additional notes (which are not sustained). This can be useful for musical passages with low bass pedal points, in which a bass note is sustained while a series of chords changes over top of it, and other otherwise tricky parts. On many upright pianos, the middle pedal is called the "practice '' or celeste pedal. This drops a piece of felt between the hammers and strings, greatly muting the sounds. This pedal can be shifted while depressed, into a "locking '' position.
There are also non-standard variants. On some pianos (grands and verticals), the middle pedal can be a bass sustain pedal: that is, when it is depressed, the dampers lift off the strings only in the bass section. Players use this pedal to sustain a single bass note or chord over many measures, while playing the melody in the treble section. On the Stuart and Sons piano as well as the largest Fazioli piano, there is a fourth pedal to the left of the principal three. This fourth pedal works in the same way as the soft pedal of an upright piano, moving the hammers closer to the strings.
The rare transposing piano (an example of which was owned by Irving Berlin) has a middle pedal that functions as a clutch that disengages the keyboard from the mechanism, so the player can move the keyboard to the left or right with a lever. This shifts the entire piano action so the pianist can play music written in one key so that it sounds in a different key. Some piano companies have included extra pedals other than the standard two or three. Crown and Schubert Piano Co. produced a four - pedal piano. Fazioli currently offers a fourth pedal that provides a second soft pedal, that works by bringing the keys closer to the strings.
Wing and Son of New York offered a five-pedal piano from approximately 1893 through the 1920s; there is no mention of the company past the 1930s. Labeled left to right, the pedals are Mandolin, Orchestra, Expression, Soft, and Forte (Sustain). The Orchestra pedal produced a tremolo-like sound by bouncing a set of small beads dangling against the strings, enabling the piano to mimic a mandolin, guitar, banjo, zither and harp, hence the name. The Mandolin pedal used a similar approach, lowering a set of felt strips with metal rings between the hammers and the strings (the so-called rinky-tink effect). This extended the life of the hammers when the Orchestra pedal was used, made the piano well suited to practicing, and created an echo-like sound that mimicked playing in an orchestral hall.
The pedalier piano, or pedal piano, is a rare type of piano that includes a pedalboard so players can use their feet to play bass register notes, as on an organ. There are two types of pedal piano. On one, the pedalboard is an integral part of the instrument, using the same strings and mechanism as the manual keyboard. The other, rarer type consists of two independent pianos (each with separate mechanics and strings) placed one above the other -- one for the hands and one for the feet. This was developed primarily as a practice instrument for organists, though there is a small repertoire written specifically for the instrument.
When a key is struck, a chain reaction occurs to produce the sound. First, the key raises the "wippen" mechanism, which forces the jack against the hammer roller (or knuckle). The hammer roller then lifts the lever carrying the hammer. The key also raises the damper, and immediately after the hammer strikes the wire the hammer falls back, allowing the wire to resonate and thus produce sound. When the key is released the damper falls back onto the strings, stopping the wire from vibrating and thus stopping the sound. The vibrating piano strings themselves are not very loud; their vibrations are transmitted to a large soundboard that moves air and thus converts the energy to sound. The irregular shape and off-center placement of the bridge ensure that the soundboard vibrates strongly at all frequencies. (See Piano action for a diagram and detailed description of piano parts.) The piano hammer is "thrown" against the strings: once a pianist has pressed or struck a key and the hammer is set in motion towards the strings, pressure on the key no longer gives the player control over the hammer. The damper keeps the note sounding until the key is released (or, if the sustain pedal is depressed, until the pedal is released).
Three factors influence the pitch of a vibrating wire: its length, its tension, and its mass per unit length. A shorter, tighter, or lighter wire vibrates faster and therefore produces a higher pitch.
A vibrating wire subdivides itself into many parts vibrating at the same time. Each part produces a pitch of its own, called a partial. A vibrating string has one fundamental and a series of partials. The most pure combination of two pitches is when one is double the frequency of the other.
For a repeating wave, the velocity v equals the wavelength λ times the frequency f: v = λf.
On the piano string, waves reflect from both ends. The superposition of reflecting waves results in a standing wave pattern, but only for wavelengths λ = 2L, L, 2L / 3, L / 2,... = 2L / n, where L is the length of the string. Therefore, the only frequencies produced on a single string are f = nv / 2L. Timbre is largely determined by the content of these harmonics. Different instruments have different harmonic content for the same pitch. A real string vibrates at harmonics that are not perfect multiples of the fundamental. This results in a little inharmonicity, which gives richness to the tone but causes significant tuning challenges throughout the compass of the instrument.
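The standing-wave relation above can be checked numerically. The sketch below assumes an idealized, perfectly flexible string (so it ignores the inharmonicity just mentioned); the string length and wave speed are illustrative values, not measurements from any particular piano:

```python
# Standing waves on a string fixed at both ends allow only
# wavelengths 2L/n, giving frequencies f_n = n * v / (2 * L).
# L and v below are illustrative, not measured piano data.

def harmonic_frequencies(v, L, n_max):
    """First n_max standing-wave frequencies (Hz) for wave speed v
    (m/s) on a string of length L (m)."""
    return [n * v / (2 * L) for n in range(1, n_max + 1)]

# Choose v so a 0.62 m string has its fundamental at A4 = 440 Hz.
length = 0.62
speed = 2 * length * 440.0
for n, f in enumerate(harmonic_frequencies(speed, length, 4), start=1):
    print(f"partial {n}: {f:.1f} Hz")
```

For the ideal string the partials land on exact integer multiples of 440 Hz; a real string's stiffness pushes the upper partials slightly sharp of these values.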
Striking the piano key with greater velocity increases the amplitude of the waves and therefore the volume. From pianissimo (pp) to fortissimo (ff) the hammer velocity changes by almost a factor of a hundred. The hammer contact time with the string shortens from 4 milliseconds at pp to less than 2 ms at ff. If two wires adjusted to the same pitch are struck at the same time, the sound produced by one reinforces the other, and a louder combined sound of shorter duration is produced. If one wire vibrates out of synchronization with the other, they subtract from each other and produce a softer tone of longer duration.
Pianos are heavy and powerful, yet delicate instruments. Over the years, professional piano movers have developed special techniques for transporting both grands and uprights, which prevent damage to the case and to the piano 's mechanical elements. Pianos need regular tuning to keep them on correct pitch. The hammers of pianos are voiced to compensate for gradual hardening of the felt, and other parts also need periodic regulation. Pianos need regular maintenance to ensure the felt hammers and key mechanisms are functioning properly. Aged and worn pianos can be rebuilt or reconditioned by piano rebuilders. Strings eventually need to be replaced. Often, by replacing a great number of their parts, and adjusting them, old instruments can perform as well as new pianos.
Piano tuning involves adjusting the tensions of the piano 's strings with a specialized wrench, thereby aligning the intervals among their tones so that the instrument is in tune. While guitar and violin players tune their own instruments, pianists usually hire a piano tuner, a specialized technician, to tune their pianos. The piano tuner uses special tools. The meaning of the term in tune in the context of piano tuning is not simply a particular fixed set of pitches. Fine piano tuning carefully assesses the interaction among all notes of the chromatic scale, different for every piano, and thus requires slightly different pitches from any theoretical standard. Pianos are usually tuned to a modified version of the system called equal temperament (see Piano key frequencies for the theoretical piano tuning). In all systems of tuning, each pitch is derived from its relationship to a chosen fixed pitch, usually the internationally recognized standard concert pitch of A (the A above middle C). The term A440 refers to a widely accepted frequency of this pitch -- 440 Hz.
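For the theoretical equal-temperament reference mentioned above, each key's pitch follows from A4 = 440 Hz by the standard formula f(n) = 440 × 2^((n − 49)/12), where n is the key number on an 88-key piano. A minimal sketch (real pianos are tuned with octave "stretch", so actual pitches deviate slightly from these values):

```python
# Theoretical equal-temperament pitch of the n-th key on an 88-key
# piano (1 = A0, 49 = A4, 88 = C8), anchored to A4 = 440 Hz.
# Real tunings are "stretched" and deviate slightly from this.

def key_frequency(n, pitch_standard=440.0):
    """Unstretched equal-temperament frequency (Hz) of key n."""
    return pitch_standard * 2 ** ((n - 49) / 12)

print(round(key_frequency(1), 2))   # 27.5    (A0, lowest key)
print(round(key_frequency(40), 2))  # 261.63  (middle C)
print(round(key_frequency(88), 2))  # 4186.01 (C8, highest key)
```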
The relationship between two pitches, called an interval, is the ratio of their absolute frequencies. Two different intervals are perceived as the same when the pairs of pitches involved share the same frequency ratio. The easiest intervals to identify, and the easiest intervals to tune, are those that are just, meaning they have a simple whole - number ratio. The term temperament refers to a tuning system that tempers the just intervals (usually the perfect fifth, which has the ratio 3: 2) to satisfy another mathematical property; in equal temperament, a fifth is tempered by narrowing it slightly, achieved by flattening its upper pitch slightly, or raising its lower pitch slightly. A temperament system is also known as a set of "bearings ''. Tempering an interval causes it to beat, which is a fluctuation in perceived sound intensity due to interference between close (but unequal) pitches. The rate of beating is equal to the frequency differences of any harmonics that are present for both pitches and that coincide or nearly coincide. Piano tuners have to use their ear to "stretch '' the tuning of a piano to make it sound in tune. This involves tuning the highest - pitched strings slightly higher and the lowest - pitched strings slightly lower than what a mathematical frequency table (in which octaves are derived by doubling the frequency) would suggest.
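The tempered fifth described above can be quantified. In twelve-tone equal temperament the fifth has ratio 2^(7/12) ≈ 1.4983 instead of the just 3:2, and the beat rate equals the difference between the nearly coinciding harmonics: the lower note's third harmonic against the upper note's second. A sketch, with an illustrative starting pitch:

```python
# An equal-tempered fifth is narrowed from the just ratio 3:2 to
# 2**(7/12) ~ 1.4983. The audible beat is the difference between
# the lower note's 3rd harmonic and the upper note's 2nd harmonic.

JUST_FIFTH = 3 / 2
ET_FIFTH = 2 ** (7 / 12)

def fifth_beat_rate(f_lower):
    """Beats per second of an equal-tempered fifth built on f_lower (Hz)."""
    f_upper = f_lower * ET_FIFTH
    return abs(3 * f_lower - 2 * f_upper)

# Illustrative: the fifth A3 (220 Hz) - E4 beats a bit under once a second.
print(round(fifth_beat_rate(220.0), 2))  # 0.74
```

Because the beat rate scales with the starting frequency, the same tempered fifth beats faster higher up the keyboard, which is one reason tuners work outward from a central temperament octave.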
As with any other musical instrument, the piano may be played from written music, by ear, or through improvisation. Piano technique evolved during the transition from harpsichord and clavichord to fortepiano playing, and continued through the development of the modern piano. Changes in musical styles and audience preferences over the 19th and 20th century, as well as the emergence of virtuoso performers, contributed to this evolution and to the growth of distinct approaches or schools of piano playing. Although technique is often viewed as only the physical execution of a musical idea, many pedagogues and performers stress the interrelatedness of the physical and mental or emotional aspects of piano playing. Well - known approaches to piano technique include those by Dorothy Taubman, Edna Golandsky, Fred Karpoff, Charles - Louis Hanon and Otto Ortmann.
Many classical music composers, including Haydn, Mozart, and Beethoven, composed for the fortepiano, a rather different instrument than the modern piano. The fortepiano was a quieter instrument with a narrower dynamic range and a smaller span of octaves. Even composers of the Romantic movement, like Liszt, Chopin, Robert Schumann, Felix Mendelssohn, and Johannes Brahms, wrote for pianos substantially different from 2010-era modern pianos. Contemporary musicians may adjust their interpretation of historical compositions from the 1600s to the 1800s to account for sound quality differences between old and new instruments or for changing performance practice.
Starting in Beethoven 's later career, the fortepiano evolved into an instrument more like the modern piano of the 2000s. Modern pianos were in wide use by the late 19th century. They featured an octave range larger than the earlier fortepiano instrument, adding around 30 more keys to the instrument, which extended the deep bass range and the high treble range. Factory mass production of upright pianos made them more affordable for a larger number of middle - class people. They appeared in music halls and pubs during the 19th century, providing entertainment through a piano soloist, or in combination with a small dance band. Just as harpsichordists had accompanied singers or dancers performing on stage, or playing for dances, pianists took up this role in the late 1700s and in the following centuries.
During the 19th century, American musicians playing for working - class audiences in small pubs and bars, particularly African - American composers, developed new musical genres based on the modern piano. Ragtime music, popularized by composers such as Scott Joplin, reached a broader audience by 1900. The popularity of ragtime music was quickly succeeded by Jazz piano. New techniques and rhythms were invented for the piano, including ostinato for boogie - woogie, and Shearing voicing. George Gershwin 's Rhapsody in Blue broke new musical ground by combining American jazz piano with symphonic sounds. Comping, a technique for accompanying jazz vocalists on piano, was exemplified by Duke Ellington 's technique. Honky - tonk music, featuring yet another style of piano rhythm, became popular during the same era. Bebop techniques grew out of jazz, with leading composer - pianists such as Thelonious Monk and Bud Powell. In the late 20th century, Bill Evans composed pieces combining classical techniques with his jazz experimentation. In the 1970s, Herbie Hancock was one of the first jazz composer - pianists to find mainstream popularity working with newer urban music techniques such as jazz - funk and jazz - rock.
Pianos have also been used prominently in rock and roll and rock music by entertainers such as Jerry Lee Lewis, Little Richard, Keith Emerson (Emerson, Lake & Palmer), Elton John, Ben Folds, Billy Joel, Nicky Hopkins, and Tori Amos, to name a few. Modernist styles of music have also appealed to composers writing for the modern grand piano, including John Cage and Philip Glass.
The piano is a crucial instrument in Western classical music, jazz, blues, rock, folk music, and many other Western musical genres. A large number of composers and songwriters are proficient pianists because the piano keyboard offers an effective means of experimenting with complex melodic and harmonic interplay and trying out multiple, independent melody lines that are played at the same time. Pianos are used in film and television scoring, as the large range permits composers to try out melodies and basslines, even if the music will be orchestrated for other instruments. Bandleaders often learn the piano, as it is an excellent instrument upon which to learn new pieces and songs which one will be leading during a performance. The piano is an essential tool in music education in elementary and secondary schools and universities and colleges. Most music classrooms and practice rooms have a piano. Pianos are used to help teach music theory, music history and music appreciation classes. Many conductors are trained in piano, because it allows them to play parts of the symphonies they are conducting (using a piano reduction or doing a reduction from the full score), so that they can develop their interpretation.
List of United States Supreme Court cases involving constitutional criminal procedure
The United States Constitution contains several provisions regarding criminal procedure, including: Article Three, along with Amendments Five, Six, Eight, and Fourteen. Such cases have come to comprise a substantial portion of the Supreme Court 's docket.
See § Jury Clauses
Concerning only incrimination that occurs in the courtroom
See § Criminal due process
Omar Magatheh v. Noran Magatheh, 129 S. Ct. 1283 (2009)
Also the Fifth Amendment
Electric charge
Electric charge is the physical property of matter that causes it to experience a force when placed in an electromagnetic field. There are two types of electric charges: positive and negative (commonly carried by protons and electrons respectively). Like charges repel and unlike attract. An absence of net charge is referred to as neutral. An object is negatively charged if it has an excess of electrons, and is otherwise positively charged or uncharged. The SI derived unit of electric charge is the coulomb (C). In electrical engineering, it is also common to use the ampere-hour (Ah), and, in chemistry, it is common to use the elementary charge (e) as a unit. The symbol Q often denotes charge. Early knowledge of how charged substances interact is now called classical electrodynamics, and is still accurate for problems that don't require consideration of quantum effects.
The electric charge is a fundamental conserved property of some subatomic particles, which determines their electromagnetic interaction. Electrically charged matter is influenced by, and produces, electromagnetic fields. The interaction between a moving charge and an electromagnetic field is the source of the electromagnetic force, which is one of the four fundamental forces (See also: magnetic field).
Twentieth-century experiments demonstrated that electric charge is quantized; that is, it comes in integer multiples of individual small units called the elementary charge, e, approximately equal to 1.602 × 10⁻¹⁹ coulombs (except for particles called quarks, which have charges that are integer multiples of e/3). The proton has a charge of +e, and the electron has a charge of −e. The study of charged particles, and how their interactions are mediated by photons, is called quantum electrodynamics.
Charge is the fundamental property of forms of matter that exhibit electrostatic attraction or repulsion in the presence of other matter. Electric charge is a characteristic property of many subatomic particles. The charges of free - standing particles are integer multiples of the elementary charge e; we say that electric charge is quantized. Michael Faraday, in his electrolysis experiments, was the first to note the discrete nature of electric charge. Robert Millikan 's oil drop experiment demonstrated this fact directly, and measured the elementary charge.
By convention, the charge of an electron is − 1, while that of a proton is + 1. Charged particles whose charges have the same sign repel one another, and particles whose charges have different signs attract. Coulomb 's law quantifies the electrostatic force between two particles by asserting that the force is proportional to the product of their charges, and inversely proportional to the square of the distance between them.
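Coulomb's law as stated above can be written F = k·q1·q2/r², with k ≈ 8.988 × 10⁹ N·m²/C². The sketch below evaluates it for a proton-electron pair; the separation used (roughly one Bohr radius) is an illustrative value:

```python
# Coulomb's law: F = k * q1 * q2 / r**2. A positive result means
# repulsion, a negative one attraction. The proton-electron
# separation below (about one Bohr radius) is illustrative.

K = 8.9875517873681764e9   # Coulomb constant, N*m^2/C^2
E = 1.602176634e-19        # elementary charge, C

def coulomb_force(q1, q2, r):
    """Electrostatic force (N) between point charges q1, q2 (C)
    a distance r (m) apart."""
    return K * q1 * q2 / r ** 2

# Proton (+e) and electron (-e) about 5.29e-11 m apart:
print(coulomb_force(E, -E, 5.29e-11))  # ~ -8.2e-8 N (attractive)
```

The sign convention falls out of the product of the charges: equal signs give a positive (repulsive) force, opposite signs a negative (attractive) one, matching the convention stated above.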
The charge of an antiparticle equals that of the corresponding particle, but with opposite sign. Quarks have fractional charges of either − 1 / 3 or + 2 / 3, but free - standing quarks have never been observed (the theoretical reason for this fact is asymptotic freedom).
The electric charge of a macroscopic object is the sum of the electric charges of the particles that make it up. This charge is often small, because matter is made of atoms, and atoms typically have equal numbers of protons and electrons, in which case their charges cancel out, yielding a net charge of zero, thus making the atom neutral.
An ion is an atom (or group of atoms) that has lost one or more electrons, giving it a net positive charge (cation), or that has gained one or more electrons, giving it a net negative charge (anion). Monatomic ions are formed from single atoms, while polyatomic ions are formed from two or more atoms that have been bonded together, in each case yielding an ion with a positive or negative net charge.
During formation of macroscopic objects, constituent atoms and ions usually combine to form structures composed of neutral ionic compounds electrically bound to neutral atoms. Thus macroscopic objects tend toward being neutral overall, but macroscopic objects are rarely perfectly net neutral.
Sometimes macroscopic objects contain ions distributed throughout the material, rigidly bound in place, giving an overall net positive or negative charge to the object. Also, macroscopic objects made of conductive elements, can more or less easily (depending on the element) take on or give off electrons, and then maintain a net negative or positive charge indefinitely. When the net electric charge of an object is non-zero and motionless, the phenomenon is known as static electricity. This can easily be produced by rubbing two dissimilar materials together, such as rubbing amber with fur or glass with silk. In this way non-conductive materials can be charged to a significant degree, either positively or negatively. Charge taken from one material is moved to the other material, leaving an opposite charge of the same magnitude behind. The law of conservation of charge always applies, giving the object from which a negative charge is taken a positive charge of the same magnitude, and vice versa.
Even when an object 's net charge is zero, charge can be distributed non-uniformly in the object (e.g., due to an external electromagnetic field, or bound polar molecules). In such cases the object is said to be polarized. The charge due to polarization is known as bound charge, while charge on an object produced by electrons gained or lost from outside the object is called free charge. The motion of electrons in conductive metals in a specific direction is known as electric current.
The SI unit of quantity of electric charge is the coulomb, which is equivalent to about 6.242 × 10¹⁸ e (e is the charge of a proton). Hence, the charge of an electron is approximately −1.602 × 10⁻¹⁹ C. The coulomb is defined as the quantity of charge that has passed through the cross section of an electrical conductor carrying one ampere within one second. The symbol Q is often used to denote a quantity of electricity or charge. The quantity of electric charge can be directly measured with an electrometer, or indirectly measured with a ballistic galvanometer.
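The conversions in this paragraph are simple arithmetic with the elementary charge; a minimal sketch:

```python
# Conversions between coulombs and counts of the elementary charge e.
# Since the 2019 SI redefinition, e is exactly 1.602176634e-19 C.

E = 1.602176634e-19  # elementary charge, C

def charge_in_coulombs(n_elementary):
    """Charge (C) carried by n_elementary units of e (signed)."""
    return n_elementary * E

def elementary_units(q_coulombs):
    """Number of elementary charges making up a charge q (C)."""
    return q_coulombs / E

print(f"{elementary_units(1.0):.4g} e per coulomb")  # ~6.242e+18
print(charge_in_coulombs(-1))  # electron's charge, ~ -1.602e-19 C
```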
After finding the quantized character of charge, in 1891 George Stoney proposed the unit ' electron ' for this fundamental unit of electrical charge. This was before the discovery of the particle by J.J. Thomson in 1897. The unit is today treated as nameless, referred to as "elementary charge '', "fundamental unit of charge '', or simply as "e ''. A measure of charge should be a multiple of the elementary charge e, even if at large scales charge seems to behave as a real quantity. In some contexts it is meaningful to speak of fractions of a charge; for example in the charging of a capacitor, or in the fractional quantum Hall effect.
The unit faraday is sometimes used in electrochemistry. One faraday of charge is the magnitude of the charge of one mole of electrons, i.e. 96485.33289 (59) C.
In systems of units other than SI such as cgs, electric charge is expressed as combination of only three fundamental quantities (length, mass, and time), and not four, as in SI, where electric charge is a combination of length, mass, time, and electric current.
As reported by the ancient Greek mathematician Thales of Miletus around 600 BC, charge (or electricity) could be accumulated by rubbing fur on various substances, such as amber. The Greeks observed that the charged amber buttons could attract light objects such as hair. They also found that if they rubbed the amber for long enough, they could even get an electric spark to jump. This property derives from the triboelectric effect.
In 1600, the English scientist William Gilbert returned to the subject in De Magnete, and coined the New Latin word electricus from ἤλεκτρον (ēlektron), the Greek word for amber, which soon gave rise to the English words "electric '' and "electricity ''. He was followed in 1660 by Otto von Guericke, who invented what was probably the first electrostatic generator. Other European pioneers were Robert Boyle, who in 1675 stated that electric attraction and repulsion can act across a vacuum; Stephen Gray, who in 1729 classified materials as conductors and insulators; and C.F. du Fay, who proposed in 1733 that electricity comes in two varieties that cancel each other, and expressed this in terms of a two - fluid theory. When glass was rubbed with silk, du Fay said that the glass was charged with vitreous electricity, and, when amber was rubbed with fur, the amber was charged with resinous electricity. In 1839, Michael Faraday showed that the apparent division between static electricity, current electricity, and bioelectricity was incorrect, and all were a consequence of the behavior of a single kind of electricity appearing in opposite polarities. It is arbitrary which polarity is called positive and which is called negative. Positive charge can be defined as the charge left on a glass rod after being rubbed with silk.
One of the foremost experts on electricity in the 18th century was Benjamin Franklin, who argued in favour of a one - fluid theory of electricity. Franklin imagined electricity as being a type of invisible fluid present in all matter; for example, he believed that it was the glass in a Leyden jar that held the accumulated charge. He posited that rubbing insulating surfaces together caused this fluid to change location, and that a flow of this fluid constitutes an electric current. He also posited that when matter contained too little of the fluid it was "negatively '' charged, and when it had an excess it was "positively '' charged. For a reason that was not recorded, he identified the term "positive '' with vitreous electricity and "negative '' with resinous electricity. William Watson independently arrived at the same explanation at about the same time (1746).
Static electricity and electric current are two separate phenomena. They both involve electric charge, and may occur simultaneously in the same object. Static electricity refers to the electric charge of an object and the related electrostatic discharge when two objects are brought together that are not at equilibrium. An electrostatic discharge creates a change in the charge of each of the two objects. In contrast, electric current is the flow of electric charge through an object, which produces no net loss or gain of electric charge.
When a piece of glass and a piece of resin -- neither of which exhibit any electrical properties -- are rubbed together and left with the rubbed surfaces in contact, they still exhibit no electrical properties. When separated, they attract each other.
A second piece of glass rubbed with a second piece of resin, then separated and suspended near the former pieces of glass and resin, causes these phenomena: the two pieces of glass repel each other, the two pieces of resin repel each other, and each piece of glass attracts each piece of resin.
This attraction and repulsion are electrical phenomena, and the bodies that exhibit them are said to be electrified, or electrically charged. Bodies may be electrified in many other ways, as well as by friction. The electrical properties of the two pieces of glass are similar to each other but opposite to those of the two pieces of resin: The glass attracts what the resin repels and repels what the resin attracts.
If a body electrified in any manner whatsoever behaves as the glass does, that is, if it repels the glass and attracts the resin, the body is said to be vitreously electrified, and if it attracts the glass and repels the resin it is said to be resinously electrified. All electrified bodies are either vitreously or resinously electrified.
An established convention in the scientific community defines vitreous electrification as positive, and resinous electrification as negative. The exactly opposite properties of the two kinds of electrification justify our indicating them by opposite signs, but the application of the positive sign to one rather than to the other kind must be considered as a matter of arbitrary convention -- just as it is a matter of convention in a mathematical diagram to reckon positive distances towards the right hand.
No force, either of attraction or of repulsion, can be observed between an electrified body and a body not electrified.
Strictly speaking, all bodies contain electric charge, but a body may appear not electrified because its charge is balanced by the relatively similar charge of neighboring objects in the environment. A body given an additional + or -- charge induces an opposite charge in neighboring objects, until those charges can equalize. The resulting attraction is easiest to observe in high - voltage experiments; at lower voltages the same effects are merely weaker and therefore less obvious. The attraction and repulsion forces are codified by Coulomb 's law: the force falls off with the square of the distance, the same inverse - square form as Newtonian gravitation, although the two forces are distinct phenomena. See also the Casimir effect.
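Coulomb's inverse-square law mentioned above is easy to illustrate numerically. A minimal sketch in SI units (the charges and distances are arbitrary example values):

```python
# Coulomb's law: F = k * q1 * q2 / r**2, with k the Coulomb constant.
K = 8.9875e9  # N*m^2/C^2

def coulomb_force(q1, q2, r):
    """Magnitude of the electrostatic force between two point charges."""
    return K * q1 * q2 / r**2

# Doubling the separation quarters the force: the inverse-square falloff.
near = coulomb_force(1e-6, 1e-6, 0.1)   # two 1 uC charges, 10 cm apart
far = coulomb_force(1e-6, 1e-6, 0.2)    # same charges, 20 cm apart
# near / far == 4.0
```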
It is now known that the Franklin - Watson model was fundamentally correct. There is only one kind of electrical charge, and only one variable is required to keep track of the amount of charge. On the other hand, just knowing the charge is not a complete description of the situation. Matter is composed of several kinds of electrically charged particles, and these particles have many properties, not just charge.
The most common charge carriers are the positively charged proton and the negatively charged electron. The movement of any of these charged particles constitutes an electric current. In many situations, it suffices to speak of the conventional current without regard to whether it is carried by positive charges moving in the direction of the conventional current or by negative charges moving in the opposite direction. This macroscopic viewpoint is an approximation that simplifies electromagnetic concepts and calculations.
At the opposite extreme, if one looks at the microscopic situation, one sees there are many ways of carrying an electric current, including: a flow of electrons; a flow of electron "holes '' that act like positive particles; and both negative and positive particles (ions or other charged particles) flowing in opposite directions in an electrolytic solution or a plasma.
Beware that, in the common and important case of metallic wires, the direction of the conventional current is opposite to the drift velocity of the actual charge carriers; i.e., the electrons. This is a source of confusion for beginners.
The total electric charge of an isolated system remains constant regardless of changes within the system itself. This law is inherent to all processes known to physics and can be derived in a local form from gauge invariance of the wave function. The conservation of charge results in the charge - current continuity equation. More generally, the net change in the charge density ρ within a volume of integration V is equal to the area integral over the current density J through the closed surface S = ∂V, which is in turn equal to the net current I:

    − (d/dt) ∫_V ρ dV = ∮_S J · dS = I

Thus, the conservation of electric charge, as expressed by the continuity equation, gives the result:

    I = − dQ/dt

The charge transferred between times t_i and t_f is obtained by integrating both sides:

    ∫_{t_i}^{t_f} I dt = Q(t_i) − Q(t_f)
where I is the net outward current through a closed surface and Q is the electric charge contained within the volume defined by the surface.
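The relation between the net outward current I and the enclosed charge Q can be checked numerically for a concrete current profile. The exponential discharge below is an illustrative assumption, not something given in the text:

```python
import math

# Assumed profile: a net outward current I(t) = I0 * exp(-t / TAU)
# drains charge from the enclosed volume, so dQ/dt = -I(t).
I0, TAU, Q0 = 2.0, 0.5, 1.0

def current(t):
    return I0 * math.exp(-t / TAU)

def charge(t, steps=10000):
    """Q(t) by trapezoidal integration of dQ/dt = -I(t) from 0 to t."""
    dt = t / steps
    q = Q0
    for k in range(steps):
        q -= 0.5 * (current(k * dt) + current((k + 1) * dt)) * dt
    return q

# Closed form of the same integral: Q(t) = Q0 - I0*TAU*(1 - exp(-t/TAU)).
analytic = Q0 - I0 * TAU * (1 - math.exp(-1.0 / TAU))
```

The numerical Q(1.0) agrees with the closed form, confirming that the integrated current through the surface accounts exactly for the charge lost from the volume.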
Aside from the properties described in articles about electromagnetism, charge is a relativistic invariant. This means that any particle that has charge Q, no matter how fast it goes, always has charge Q. This property has been experimentally verified by showing that the charge of one helium nucleus (two protons and two neutrons bound together in a nucleus and moving around at high speeds) is the same as two deuterium nuclei (one proton and one neutron bound together, but moving much more slowly than they would if they were in a helium nucleus).
|
pirates of the caribbean dead man's chest psp gameplay | Pirates of the Caribbean: Dead Man 's Chest (video game) - Wikipedia
Pirates of the Caribbean: Dead Man 's Chest is an action - adventure game based on the film of the same name developed by Griptonite Games and Amaze Entertainment for the Game Boy Advance, Nintendo DS and PSP.
The game incorporates role - playing elements in which Jack Sparrow and the Black Pearl can be customized. Dead Man 's Chest is played on land and at sea. On land, the player must defeat enemies and search for treasure or for comrades; items, rumors and boat enhancements can be bought in towns. At sea, the player must sail from one island to another to play through the story or to explore the world. Sea battles can take place when the Black Pearl is steered towards other ships; during these battles the cannons are fired to damage the opposing ship, and once it has been sufficiently damaged, it is possible to board the burning vessel and plunder it for food, grog and even treasure. The GBA version 's gameplay is similar to the Castlevania game engine.
The game was met with mixed to average reception upon release. GameRankings and Metacritic gave it a score of 74.30 % and 70 out of 100 for the Game Boy Advance version; 63.54 % and 63 out of 100 for the DS version; and 52.71 % and 52 out of 100 for the PSP version.
|
when did the first 45 record come out | Phonograph record - Wikipedia
A phonograph record (also known as a gramophone record, especially in British English, or record) is an analog sound storage medium in the form of a flat disc with an inscribed, modulated spiral groove. The groove usually starts near the periphery and ends near the center of the disc. At first, the discs were commonly made from shellac; starting in the 1950s polyvinyl chloride became common. In recent decades, records have sometimes been called vinyl records, or simply vinyl.
The phonograph disc record was the primary medium used for music reproduction throughout the 20th century. It had co-existed with the phonograph cylinder from the late 1880s and had effectively superseded it by around 1912. Records retained the largest market share even when new formats such as the compact cassette were mass - marketed. By the 1980s, digital media, in the form of the compact disc, had gained a larger market share, and the vinyl record left the mainstream in 1991. From the 1990s to the 2010s, records continued to be manufactured and sold on a much smaller scale, and were especially used by disc jockeys (DJs) and released by artists in mostly dance music genres, and listened to by a niche market of audiophiles. The phonograph record has made a notable niche resurgence in the early 21st century -- 9.2 million records were sold in the U.S. in 2014, a 260 % increase since 2009. Likewise, in the UK sales have increased five-fold from 2009 to 2014.
As of 2017, 48 record pressing facilities remain worldwide, 18 in the United States and 30 in other countries. The increased popularity of vinyl has led to the investment in new and modern record - pressing machines. Only two producers of lacquers remain: Apollo Masters in California, and MDC in Japan.
Phonograph records are generally described by their diameter in inches (12 - inch, 10 - inch, 7 - inch); the rotational speed in revolutions per minute (rpm) at which they are played (8 1⁄3, 16 2⁄3, 33 1⁄3, 45, 78); their time capacity, determined by their diameter and speed (LP (long playing), 12 - inch disc, 33 1⁄3 rpm; SP (single), 10 - inch disc, 78 rpm, or 7 - inch disc, 45 rpm; EP (extended play), 12 - inch disc, 33 1⁄3 or 45 rpm); their reproductive quality, or level of fidelity (high - fidelity, orthophonic, full - range, etc.); and the number of audio channels (mono, stereo, quad, etc.).
Vinyl records may be scratched or warped if stored incorrectly, but as long as they are not exposed to high heat, carelessly handled or broken, they have the potential to last for centuries.
The large cover (and inner sleeves) are valued by collectors and artists for the space given for visual expression, especially when it comes to the long play vinyl LP.
The phonautograph, patented by Léon Scott in 1857, used a vibrating diaphragm and stylus to graphically record sound waves as tracings on sheets of paper, purely for visual analysis and without any intent of playing them back. In the 2000s, these tracings were first scanned by audio engineers and digitally converted into audible sound. Phonautograms of singing and speech made by Scott in 1860 were played back as sound for the first time in 2008. Along with a tuning fork tone and unintelligible snippets recorded as early as 1857, these are the earliest known recordings of sound.
In 1877, Thomas Edison invented the phonograph. Unlike the phonautograph, it could both record and reproduce sound. Despite the similarity of name, there is no documentary evidence that Edison 's phonograph was based on Scott 's phonautograph. Edison first tried recording sound on a wax - impregnated paper tape, with the idea of creating a "telephone repeater '' analogous to the telegraph repeater he had been working on. Although the visible results made him confident that sound could be physically recorded and reproduced, his notes do not indicate that he actually reproduced sound before his first experiment in which he used tinfoil as a recording medium several months later. The tinfoil was wrapped around a grooved metal cylinder and a sound - vibrated stylus indented the tinfoil while the cylinder was rotated. The recording could be played back immediately. The Scientific American article that introduced the tinfoil phonograph to the public mentioned Marey, Rosapelly and Barlow as well as Scott as creators of devices for recording but, importantly, not reproducing sound. Edison also invented variations of the phonograph that used tape and disc formats. Numerous applications for the phonograph were envisioned, but although it enjoyed a brief vogue as a startling novelty at public demonstrations, the tinfoil phonograph proved too crude to be put to any practical use. A decade later, Edison developed a greatly improved phonograph that used a hollow wax cylinder instead of a foil sheet. This proved to be both a better - sounding and far more useful and durable device. The wax phonograph cylinder created the recorded sound market at the end of the 1880s and dominated it through the early years of the 20th century.
Lateral - cut disc records were developed in the United States by Emile Berliner, who named his system the "gramophone '', distinguishing it from Edison 's wax cylinder "phonograph '' and American Graphophone 's wax cylinder "graphophone ''. Berliner 's earliest discs, first marketed in 1889, only in Europe, were 12.5 cm (approx 5 inches) in diameter, and were played with a small hand - propelled machine. Both the records and the machine were adequate only for use as a toy or curiosity, due to the limited sound quality. In the United States in 1894, under the Berliner Gramophone trademark, Berliner started marketing records of 7 inches diameter with somewhat more substantial entertainment value, along with somewhat more substantial gramophones to play them. Berliner 's records had poor sound quality compared to wax cylinders, but his manufacturing associate Eldridge R. Johnson eventually improved it. Abandoning Berliner 's "Gramophone '' trademark for legal reasons, in 1901 Johnson 's and Berliner 's separate companies reorganized to form the Victor Talking Machine Company in Camden, New Jersey, whose products would come to dominate the market for many years. Emile Berliner moved his company to Montreal in 1900. The factory, which became the Canadian branch of RCA Victor, still exists. There is a dedicated museum in Montreal for Berliner (Musée des ondes Emile Berliner).
In 1901, 10 - inch disc records were introduced, followed in 1903 by 12 - inch records. These could play for more than three and four minutes, respectively, whereas contemporary cylinders could only play for about two minutes. In an attempt to head off the disc advantage, Edison introduced the Amberol cylinder in 1909, with a maximum playing time of 4 1⁄2 minutes (at 160 rpm), which in turn were superseded by Blue Amberol Records, which had a playing surface made of celluloid, a plastic, which was far less fragile. Despite these improvements, during the 1910s discs decisively won this early format war, although Edison continued to produce new Blue Amberol cylinders for an ever - dwindling customer base until late in 1929. By 1919, the basic patents for the manufacture of lateral - cut disc records had expired, opening the field for countless companies to produce them. Analog disc records dominated the home entertainment market until they were outsold by digital compact discs in the late 1980s (which were in turn supplanted by digital audio recordings distributed via online music stores and Internet file sharing).
Early disc recordings were produced in a variety of speeds ranging from 60 to 130 rpm, and a variety of sizes. As early as 1894, Emile Berliner 's United States Gramophone Company was selling single - sided 7 - inch discs with an advertised standard speed of "about 70 rpm ''.
One standard audio recording handbook describes speed regulators, or governors, as being part of a wave of improvement introduced rapidly after 1897. A picture of a hand - cranked 1898 Berliner Gramophone shows a governor. It says that spring drives replaced hand drives. It notes that:
The speed regulator was furnished with an indicator that showed the speed when the machine was running so that the records, on reproduction, could be revolved at exactly the same speed... The literature does not disclose why 78 rpm was chosen for the phonograph industry, apparently this just happened to be the speed created by one of the early machines and, for no other reason continued to be used.
By 1925, the speed of the record was becoming standardized at a nominal value of 78 rpm. However, the standard differed between places with alternating current electricity supply at 60 hertz (cycles per second, Hz) and those at 50 Hz. Where the mains supply was 60 Hz, the actual speed was 78.26 rpm: that of a 60 Hz stroboscope illuminating 92 - bar calibration markings. Where it was 50 Hz, it was 77.92 rpm: that of a 50 Hz stroboscope illuminating 77 - bar calibration markings.
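The two figures quoted above follow from simple arithmetic: a neon stroboscope lamp on AC mains flashes twice per cycle, and a ring of n calibration bars appears stationary when the platter advances exactly one bar spacing per flash. A sketch of that calculation:

```python
# rpm at which a ring of `bars` calibration marks appears stationary
# under a neon strobe on AC mains (two flashes per mains cycle).
def strobe_rpm(mains_hz, bars):
    flashes_per_minute = 2 * mains_hz * 60
    return flashes_per_minute / bars

rpm_60hz = strobe_rpm(60, 92)  # 78.26 rpm, as quoted for 60 Hz mains
rpm_50hz = strobe_rpm(50, 77)  # 77.92 rpm, as quoted for 50 Hz mains
```

So the 60 Hz and 50 Hz "standards" differ only because the bar counts 92 and 77 were chosen to bracket a nominal 78 rpm on each supply.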
Early recordings were made entirely acoustically, the sound being collected by a horn and piped to a diaphragm, which vibrated the cutting stylus. Sensitivity and frequency range were poor, and frequency response was very irregular, giving acoustic recordings an instantly recognizable tonal quality. A singer practically had to put his or her face in the recording horn. Lower - pitched orchestral instruments such as cellos and double basses were often doubled (or replaced) by louder instruments, such as tubas. Standard violins in orchestral ensembles were commonly replaced by Stroh violins, which became popular with recording studios.
Even drums, if planned and placed properly, could be effectively recorded and heard on even the earliest jazz and military band recordings. The loudest instruments such as the drums and trumpets were positioned the farthest away from the collecting horn. Lillian Hardin Armstrong, a member of King Oliver 's Creole Jazz Band, which recorded at Gennett Records in 1923, remembered that at first Oliver and his young second trumpet, Louis Armstrong, stood next to each other and Oliver 's horn could not be heard. "They put Louis about fifteen feet over in the corner, looking all sad. ''
During the first half of the 1920s, engineers at Western Electric, as well as independent inventors such as Orlando Marsh, developed technology for capturing sound with a microphone, amplifying it with vacuum tubes, then using the amplified signal to drive an electromechanical recording head. Western Electric 's innovations resulted in a broader and smoother frequency response, which produced a dramatically fuller, clearer and more natural - sounding recording. Soft or distant sounds that were previously impossible to record could now be captured. Volume was now limited only by the groove spacing on the record and the amplification of the playback device. Victor and Columbia licensed the new electrical system from Western Electric and began issuing discs during the Spring of 1925. The first electrically made classical recording was Chopin 's "Impromptus '' and Schubert 's "Litanei '' performed by Alfred Cortot for Victor.
A 1926 Wanamaker 's ad in The New York Times offers records "by the latest Victor process of electrical recording ''. It was recognized as a breakthrough; in 1930, a Times music critic stated:
... the time has come for serious musical criticism to take account of performances of great music reproduced by means of the records. To claim that the records have succeeded in exact and complete reproduction of all details of symphonic or operatic performances... would be extravagant... (but) the article of today is so far in advance of the old machines as hardly to admit classification under the same name. Electrical recording and reproduction have combined to retain vitality and color in recitals by proxy.
Electrically amplified record players were initially expensive and slow to be adopted. In 1925, the Victor company introduced both the Orthophonic Victrola, an acoustical record player that was designed to play electrically recorded discs, and the electrically amplified Electrola. The acoustical Orthophonics were priced from US $ 95 to $300, depending on cabinetry. However the cheapest Electrola cost $650, in an era when the price of a new Ford Model T was less than $300 and clerical jobs paid around $20 a week.
The Orthophonic had an interior folded exponential horn, a sophisticated design informed by impedance - matching and transmission - line theory, and designed to provide a relatively flat frequency response. Its first public demonstration was front - page news in The New York Times, which reported:
The audience broke into applause... John Philip Sousa (said): ' (Gentlemen), that is a band. This is the first time I have ever heard music with any soul to it produced by a mechanical talking machine '... The new instrument is a feat of mathematics and physics. It is not the result of innumerable experiments, but was worked out on paper in advance of being built in the laboratory... The new machine has a range of from 100 to 5,000 (cycles), or five and a half octaves... The ' phonograph tone ' is eliminated by the new recording and reproducing process.
Gradually, electrical reproduction entered the home. The spring motor was replaced by an electric motor. The old sound box with its needle - linked diaphragm was replaced by an electromagnetic pickup that converted the needle vibrations into an electrical signal. The tone arm now served to conduct a pair of wires, not sound waves, into the cabinet. The exponential horn was replaced by an amplifier and a loudspeaker.
Sales of records declined precipitously during the Great Depression of the 1930s. RCA, which purchased the Victor Talking Machine Company in 1929, introduced an inexpensive turntable called the Duo Jr., which was designed to be connected to their radio sets. According to Edward Wallerstein (the general manager of RCA 's Victor division), this device was "instrumental in revitalizing the industry ''.
The earliest disc records (1889 -- 1894) were made of a variety of materials including hard rubber. Around 1895, a shellac - based material was introduced and became standard. Formulas for the mixture varied by manufacturer over time, but it was typically about one - third shellac and two - thirds mineral filler (finely pulverized slate or limestone), with cotton fibers to add tensile strength, carbon black for color (without which it tended to be an unattractive "dirty '' gray or brown color), and a very small amount of a lubricant to facilitate release from the manufacturing press. Columbia Records used a laminated disc with a core of coarser material or fiber. The production of shellac records continued throughout the 78 rpm era which lasted until the 1950s in industrialized nations, but well into the 1960s in others. Less abrasive formulations were developed during its waning years and very late examples in like - new condition can have noise levels as low as vinyl.
Flexible, "unbreakable '' alternatives to shellac were introduced by several manufacturers during the 78 rpm era. Beginning in 1904, Nicole Records of the UK coated celluloid or a similar substance onto a cardboard core disc for a few years, but they were noisy. In the United States, Columbia Records introduced flexible, fiber - cored "Marconi Velvet Tone Record '' pressings in 1907, but their longevity and relatively quiet surfaces depended on the use of special gold - plated Marconi Needles and the product was not successful. Thin, flexible plastic records such as the German Phonycord and the British Filmophone and Goodson records appeared around 1930 but not for long. The contemporary French Pathé Cellodiscs, made of a very thin black plastic resembling the vinyl "sound sheet '' magazine inserts of the 1965 -- 1985 era, were similarly short - lived. In the US, Hit of the Week records were introduced in early 1930. They were made of a patented translucent plastic called Durium coated on a heavy brown paper base. A new issue debuted weekly, sold at newsstands like a magazine. Although inexpensive and commercially successful at first, they fell victim to the Great Depression and US production ended in 1932. Durium records continued to be made in the UK and as late as 1950 in Italy, where the name "Durium '' survived into the LP era as a brand of vinyl records. Despite these innovations, shellac continued to be used for the overwhelming majority of commercial 78 rpm records throughout the format 's lifetime.
In 1931, RCA Victor introduced vinyl plastic - based Victrolac as a material for unusual - format and special - purpose records. One was a 16 - inch, 33 1⁄3 rpm record used by the Vitaphone sound - on - disc movie system. In 1932, RCA began using Victrolac in a home recording system. By the end of the 1930s vinyl 's light weight, strength, and low surface noise had made it the preferred material for prerecorded radio programming and other critical applications. For ordinary 78 rpm records, however, the much higher cost of the synthetic plastic, as well as its vulnerability to the heavy pickups and mass - produced steel needles used in home record players, made its general substitution for shellac impractical at that time. During the Second World War, the United States Armed Forces produced thousands of 12 - inch vinyl 78 rpm V - Discs for use by the troops overseas. After the war, the use of vinyl became more practical as new record players with lightweight crystal pickups and precision - ground styli made of sapphire or an exotic osmium alloy proliferated. In late 1945, RCA Victor began offering "De Luxe '' transparent red vinyl pressings of some Red Seal classical 78s, at a De luxe price. Later, Decca Records introduced vinyl Deccalite 78s, while other record companies used vinyl formulations trademarked as Metrolite, Merco Plastic, and Sav - o - flex, but these were mainly used to produce "unbreakable '' children 's records and special thin vinyl DJ pressings for shipment to radio stations.
In the 1890s, the recording formats of the earliest (toy) discs were mainly 12.5 cm (nominally 5 inches) in diameter; by the mid-1890s, the discs were usually 7 inches (nominally 17.5 cm) in diameter.
By 1910, the 10 - inch (25.4 cm) record was by far the most popular standard, holding about 3 minutes (180 s) of music or other entertainment on a side.
From 1903 onwards, 12 - inch records (30.5 cm) were also sold commercially, mostly of classical music or operatic selections, with 4 to 5 minutes (240 to 300 s) of music per side. Victor, Brunswick and Columbia also issued 12 - inch popular medleys, usually spotlighting a Broadway show score.
Other sizes also appeared. Eight - inch (20 cm) discs with a 2 - inch - diameter (51 mm) label became popular for about a decade in Britain, but they cannot be played in full on most modern record players, since the tone arm cannot travel far enough in toward the center without modification of the equipment. In 1903, Victor offered a series of 14 - inch (35.5 cm) "Deluxe Special '' records, which sold for two dollars and played at 60 rpm. Fewer than fifty titles were issued and the series was dropped in 1906 due to poor sales. In 1906, a short - lived British firm called Neophone marketed a series of single - sided 20 - inch (50 cm) records, offering complete performances of some operatic overtures and shorter pieces. Pathé also issued 14 - inch (35.5 cm) and 20 - inch (50 cm) records around the same period.
The playing time of a phonograph record depends on the available groove length divided by the turntable speed. Total groove length in turn depends on how closely the grooves are spaced, in addition to the record diameter. At the beginning of the 20th century, the early discs played for two minutes, the same as cylinder records. The 12 - inch disc, introduced by Victor in 1903, increased the playing time to three and a half minutes. Because the standard 10 - inch 78 rpm record could hold about three minutes of sound per side, most popular recordings were limited to that duration. For example, when King Oliver 's Creole Jazz Band, including Louis Armstrong on his first recordings, recorded 13 sides at Gennett Records in Richmond, Indiana, in 1923, one side was 2: 09 and four sides were 2: 52 -- 2: 59.
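The playing-time relationship described above can be sketched as a calculation: the groove is a spiral of roughly fixed pitch, so the number of playable turns is the recorded band width divided by the pitch, and the duration is turns divided by turntable speed. The radii and pitch below are illustrative guesses, not measured values:

```python
# Playing time of one side from groove geometry. All dimensions in mm;
# the example figures are plausible guesses for a 10 - inch 78, not data
# from the text.
def playing_time_min(outer_r_mm, inner_r_mm, pitch_mm, rpm):
    turns = (outer_r_mm - inner_r_mm) / pitch_mm   # spiral revolutions
    return turns / rpm                             # minutes per side

# A ~65 mm recorded band at ~0.28 mm pitch and 78 rpm comes out near
# the roughly three minutes per side cited for 10 - inch discs.
t = playing_time_min(120, 55, 0.28, 78)
```

The same formula shows why slower speeds and finer microgroove pitches multiplied capacity: halving the speed and halving the pitch each roughly doubles the playing time.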
In January 1938, Milt Gabler started recording for Commodore Records, and to allow for longer continuous performances, he recorded some 12 - inch discs. Eddie Condon explained: "Gabler realized that a jam session needs room for development. '' The first two 12 - inch recordings did not take advantage of their capability: "Carnegie Drag '' was 3m 15s; "Carnegie Jump '', 2m 41s. But at the second session, on April 30, the two 12 - inch recordings were longer: "Embraceable You '' was 4m 05s; "Serenade to a Shylock '', 4m 32s. Another way to overcome the time limitation was to issue a selection extending to both sides of a single record. Vaudeville stars Gallagher and Shean recorded "Mr. Gallagher and Mr. Shean '', written by themselves or, allegedly, by Bryan Foy, as two sides of a 10 - inch 78 in 1922 for Victor. Longer musical pieces were released as a set of records. In 1903 HMV in England made the first complete recording of an opera, Verdi 's Ernani, on 40 single - sided discs. In 1940, Commodore released Eddie Condon and his Band 's recording of "A Good Man Is Hard to Find '' in four parts, issued on both sides of two 12 - inch 78s. The limited duration of recordings persisted from their advent until the introduction of the LP record in 1948. In popular music, the time limit of 3 1⁄2 minutes on a 10 - inch 78 rpm record meant that singers seldom recorded long pieces. One exception is Frank Sinatra 's recording of Rodgers and Hammerstein 's "Soliloquy '', from Carousel, made on May 28, 1946. Because it ran 7m 57s, longer than both sides of a standard 78 rpm 10 - inch record, it was released on Columbia 's Masterwork label (the classical division) as two sides of a 12 - inch record. The same was true of John Raitt 's performance of the song on the original cast album of Carousel, which had been issued on a 78 - rpm album set by American Decca in 1945.
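The side-count arithmetic behind these release decisions can be sketched directly; the per-side capacities are the approximate figures given in the text (about 3 1⁄2 minutes for a 10 - inch 78, about 4 1⁄2 for a 12 - inch):

```python
import math

# Number of 78 rpm sides a performance needs, given an approximate
# per-side capacity in minutes.
def sides_needed(duration_min, side_capacity_min):
    return math.ceil(duration_min / side_capacity_min)

soliloquy = 7 + 57 / 60                        # 7m 57s
on_10_inch = sides_needed(soliloquy, 3.5)      # 3 sides: won't fit one 10 - inch disc
on_12_inch = sides_needed(soliloquy, 4.5)      # 2 sides of a single 12 - inch record
```

This matches the story above: "Soliloquy" overflowed both sides of a 10 - inch 78 but fit on the two sides of one 12 - inch disc.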
In the 78 era, classical - music and spoken - word items generally were released on the longer 12 - inch 78s, about 4 -- 5 minutes per side. For example, on June 10, 1924, four months after the February 12 premiere of Rhapsody in Blue, George Gershwin recorded an abridged version of the seventeen - minute work with Paul Whiteman and His Orchestra. It was released on two sides of Victor 55225 and ran for 8m 59s.
78 rpm records were normally sold individually in brown paper or cardboard sleeves that were plain, or sometimes printed to show the producer or the retailer 's name. Generally the sleeves had a circular cut - out exposing the record label to view. Records could be laid on a shelf horizontally or stood on an edge, but because of their fragility, breakage was common.
German record company Odeon pioneered the album in 1909 when it released the Nutcracker Suite by Tchaikovsky on 4 double - sided discs in a specially designed package. However, the previous year Deutsche Grammophon had produced an album for its complete recording of the opera Carmen. The practice of issuing albums was not adopted by other record companies for many years. One exception, HMV, produced an album with a pictorial cover for its 1917 recording of The Mikado (Gilbert & Sullivan).
By about 1910, bound collections of empty sleeves with a paperboard or leather cover, similar to a photograph album, were sold as record albums that customers could use to store their records (the term "record album '' was printed on some covers). These albums came in both 10 - inch and 12 - inch sizes. The covers of these bound books were wider and taller than the records inside, allowing the record album to be placed on a shelf upright, like a book, suspending the fragile records above the shelf and protecting them.
In the 1930s, record companies began issuing collections of 78 rpm records by one performer or of one type of music in specially assembled albums, typically with artwork on the front cover and liner notes on the back or inside cover. Most albums included three or four records, with two sides each, making six or eight tunes per album. When the 12 - inch vinyl LP era began in 1948, each disc could hold a similar number of tunes as a typical album of 78s, so they were still referred to as an "album '', as they are today.
For collectible or nostalgia purposes, or for the benefit of higher - quality audio playback provided by the 78 rpm speed with newer vinyl records and their lightweight stylus pickups, a small number of 78 rpm records have been released since the major labels ceased production. One attempt at this was in 1951, when inventor Ewing Dunbar Nunn founded the label Audiophile Records, which released a series of 78 rpm - mastered albums that were microgroove and pressed on vinyl (as opposed to traditional 78s, with their shellac composition and wider 3 - mil sized grooves). This series came in heavy manila envelopes and began with a jazz album, AP - 1, and was soon followed by other AP numbers up through about AP - 19. Around 1953 the standard LP had proven itself to Nunn and he switched to 33 1⁄3 rpm and began using art slicks on a more standard cardboard sleeve. The Audiophile numbers can be found into the hundreds today, but the most collectible ones are the early 78 rpm releases, especially the first, AP - 1. The 78 rpm speed was chosen mainly to take advantage of the wider audio frequency response that faster speeds can provide for vinyl microgroove records, hence the label 's name (catering to the audiophiles of the 1950s "hi - fi '' era, when stereo gear could provide a much wider range of audio than before). Also around 1953, Bell Records released a series of budget - priced plastic 7 - inch 78 rpm pop music singles.
In 1968, Reprise planned to release a series of 78 rpm singles from their artists on their label at the time, called the Reprise Speed Series. Only one disc actually saw release, Randy Newman 's "I Think It 's Going to Rain Today '', a track from his self - titled debut album (with "The Beehive State '' on the flipside). Reprise did not proceed further with the series due to a lack of sales for the single, and a lack of general interest in the concept.
In 1978, guitarist and vocalist Leon Redbone released a promotional 78 rpm record featuring two songs ("Alabama Jubilee '' and "Please Do n't Talk About Me When I 'm Gone '') from his Champagne Charlie album.
In 1980, Stiff Records in the United Kingdom issued a 78 by Joe "King '' Carrasco containing the songs "Buena '' (Spanish for "good, '' with the alternate spelling "Bueno '' on the label) and "Tuff Enuff ''. Underground comic cartoonist and 78 rpm record collector Robert Crumb released three vinyl 78s by his Cheap Suit Serenaders in the 1970s.
In the 1990s Rhino Records issued a series of boxed sets of 78 rpm reissues of early rock and roll hits, intended for owners of vintage jukeboxes. The records were made of vinyl, however, and some of the earlier vintage 78 rpm jukeboxes and record players (the pre-war ones) were designed with heavy tone arms to play the hard slate - impregnated shellac records of their time. These vinyl Rhino 78s were softer and would be destroyed by old jukeboxes and old record players, but play very well on newer 78 - capable turntables with modern lightweight tone arms and jewel needles.
As a special release for Record Store Day 2011, Capitol re-released The Beach Boys single "Good Vibrations '' in the form of a 10 - inch 78 rpm record (b / w "Heroes and Villains ''). More recently, The Reverend Peyton 's Big Damn Band has released their tribute to blues guitarist Charley Patton Peyton on Patton on both 12 - inch LP and 10 - inch 78 rpm. Both are accompanied with a link to a digital download of the music, acknowledging the probability that purchasers might be unable to play the vinyl recording.
Both the microgroove LP 33 1⁄3 rpm record and the 45 rpm single records are made from vinyl plastic that is flexible and unbreakable in normal use, even when they are sent through the mail with care from one place to another. The vinyl records, however, are easier to scratch or gouge, and much more prone to warping compared to most 78 rpm records, which were made of shellac.
In 1931, RCA Victor launched the first commercially available vinyl long - playing record, marketed as program - transcription discs. These revolutionary discs were designed for playback at 33 1⁄3 rpm and pressed on a 30 cm diameter flexible plastic disc, with a duration of about ten minutes playing time per side. RCA Victor 's early introduction of a long - play disc was a commercial failure for several reasons including the lack of affordable, reliable consumer playback equipment and consumer wariness during the Great Depression. Because of financial hardships that plagued the recording industry during that period (and RCA 's own parched revenues), Victor 's long - playing records were discontinued by early 1933.
There was also a small batch of longer - playing records issued in the very early 1930s: Columbia introduced 10 - inch longer - playing records (18000 - D series), as well as a series of double - grooved or longer - playing 10 - inch records on their Harmony, Clarion & Velvet Tone "budget '' labels. There were also a couple of longer - playing records issued on ARC (for release on their Banner, Perfect, and Oriole labels) and on the Crown label. All of these were phased out in mid-1932.
Vinyl 's lower surface noise level than shellac was not forgotten, nor was its durability. In the late 1930s, radio commercials and pre-recorded radio programs being sent to disc jockeys started being pressed in vinyl, so they would not break in the mail. In the mid-1940s, special DJ copies of records started being made of vinyl also, for the same reason. These were all 78 rpm. During and after World War II, when shellac supplies were extremely limited, some 78 rpm records were pressed in vinyl instead of shellac, particularly the six - minute 12 - inch (30 cm) 78 rpm records produced by V - Disc for distribution to United States troops in World War II. In the 1940s, radio transcriptions, which were usually on 16 - inch records, but sometimes 12 - inch, were always made of vinyl, but cut at 33 1⁄3 rpm. Shorter transcriptions were often cut at 78 rpm.
Beginning in 1939, Dr. Peter Goldmark and his staff at Columbia Records and at CBS Laboratories undertook efforts to address the problems of recording and playing back narrow grooves and to develop an inexpensive, reliable consumer playback system. The work took about eight years, interrupted by World War II. Finally, the 12 - inch (30 cm) Long Play (LP) 33 1⁄3 rpm microgroove record album was introduced by the Columbia Record Company at a New York press conference on June 18, 1948. At the same time, Columbia introduced a vinyl 7 - inch 33 1⁄3 rpm microgroove single, calling it ZLP, but it was short - lived and is very rare today, because RCA Victor introduced the 45 rpm single a few months later, which became the standard.
Unwilling to accept and license Columbia 's system, in February 1949, RCA Victor released the first 45 rpm single, 7 inches in diameter with a large center hole. The 45 rpm player included a changing mechanism that allowed multiple disks to be stacked, much as a conventional changer handled 78s. The short playing time of a single 45 rpm side meant that long works, such as symphonies, had to be released on multiple 45s instead of a single LP, but RCA claimed that the new high - speed changer rendered side breaks so brief as to be inaudible or inconsequential. Early 45 rpm records were made from either vinyl or polystyrene. They had a playing time of eight minutes.
Another size and format was that of radio transcription discs beginning in the 1940s. These records were usually vinyl, 33 rpm, and 16 inches in diameter. No home record player could accommodate such large records, and they were used mainly by radio stations. They were on average 15 minutes per side and contained several songs or radio program material. These records became less common in the United States when tape recorders began being used for radio transcriptions around 1949. In the UK, analog discs continued to be the preferred medium for the licence of BBC transcriptions to overseas broadcasters until the use of CDs became a practical alternative.
On a few early phonograph systems and radio transcription discs, as well as some entire albums, the direction of the groove is reversed, beginning near the center of the disc and leading to the outside. A small number of records (such as The Monty Python Matching Tie and Handkerchief) were manufactured with multiple separate grooves to differentiate the tracks (usually called "NSC - X2 '').
The earliest rotation speeds varied considerably, but from 1900 to 1925 most records were recorded at 74 -- 82 revolutions per minute (rpm). Edison Disc Records consistently ran at 80 rpm.
At least one attempt to lengthen playing time was made in the early 1920s. World Records produced records that played at a constant linear velocity, controlled by Noel Pemberton Billing 's patented add - on speed governor. As the needle moved from the outside to the inside, the rotational speed of the record gradually increased as the groove diameter decreased. This behavior is similar to the modern compact disc and the CLV version of its predecessor, the (analog encoded) Philips LaserDisc, although those formats play from the inside outward, the reverse of a gramophone record.
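The constant - linear - velocity scheme can be illustrated with a little arithmetic: keeping the groove moving past the stylus at a fixed linear speed means the platter must spin faster as the groove diameter shrinks. A minimal sketch (the linear speed chosen is illustrative only, not a figure from the World Records patent):

```python
import math

def rpm_for_clv(linear_speed_ips, radius_in):
    """Rotational speed (rpm) needed so the groove passes the
    stylus at a constant linear speed, in inches per second."""
    circumference_in = 2 * math.pi * radius_in
    return linear_speed_ips / circumference_in * 60

# Illustrative linear speed of 30 inches per second:
outer = rpm_for_clv(30, 5.75)  # near the rim of a 12-inch disc
inner = rpm_for_clv(30, 2.0)   # near the label
print(round(outer, 1), round(inner, 1))  # 49.8 143.2
```

The nearly threefold speed-up toward the label is why such records needed a dedicated speed governor.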
In 1925, 78.26 rpm was standardized when the 60 Hz AC synchronous motor was introduced to power turntables. The motor ran at 3,600 rpm, so a 46:1 gear ratio would produce 78.26 rpm. In regions of the world that use 50 Hz current, the standard was 77.92 rpm (a 3,000 rpm motor with a 77:2 ratio). At that speed, a strobe disc with 77 lines would "stand still '' in 50 Hz light (92 lines for 60 Hz). After World War II, these records became retroactively known as 78s, to distinguish them from the newer disc record formats known by their rotational speeds. Earlier they were just called records, or when there was a need to distinguish them from cylinders, disc records.
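The figures above check out arithmetically: turntable speed is motor speed divided by the gear ratio, and a strobe pattern appears stationary when its line count times the rotation rate matches the lamp 's flash rate (twice the mains frequency, since an AC lamp flashes on both half - cycles). A quick verification:

```python
def turntable_rpm(motor_rpm, gear_ratio):
    return motor_rpm / gear_ratio

def strobe_lines(rpm, mains_hz):
    # an AC lamp flashes twice per cycle; the pattern freezes
    # when lines * (rpm / 60) equals that flash rate
    return 2 * mains_hz * 60 / rpm

rpm_60 = turntable_rpm(3600, 46)      # 60 Hz regions
rpm_50 = turntable_rpm(3000, 77 / 2)  # 50 Hz regions
print(round(rpm_60, 2), round(rpm_50, 2))  # 78.26 77.92
print(round(strobe_lines(rpm_60, 60)), round(strobe_lines(rpm_50, 50)))  # 92 77
```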
The older 78 rpm format continued to be mass - produced alongside the newer formats using new materials in decreasing numbers until around 1960 in the U.S., and in a few countries, such as the Philippines and India (both countries issued recordings by The Beatles on 78s), into the late 1960s. For example, Columbia Records ' last reissue of Frank Sinatra songs on 78 rpm records was an album called Young at Heart, issued in November, 1954. As late as the early 1970s, some children 's records were released at the 78 rpm speed. In the United Kingdom, the 78 rpm single persisted somewhat longer than in the United States, where it was overtaken in popularity by the 45 rpm in the late 1950s, as teenagers became increasingly affluent.
Some of Elvis Presley 's early singles on Sun Records may have sold more copies on 78 than on 45. This is because of their popularity in 1954 -- 55 in the "hillbilly '' market in the South and Southwestern United States, where replacing the family 78 rpm record player with a new 45 rpm player was a luxury few could afford at the time. By the end of 1957, RCA Victor announced that 78s accounted for less than 10 % of Presley 's singles sales, confirming the demise of the 78 rpm format. The last Presley single released on 78 in the United States was RCA Victor 20 - 7410, "I Got Stung '' / "One Night '' (1958), while the last 78 in the UK was RCA 1194, "A Mess Of Blues '' / "Girl Of My Best Friend '', issued in 1960.
After World War II, two new competing formats entered the market, gradually replacing the standard 78 rpm: the 33 1⁄3 rpm (often called 33 rpm), and the 45 rpm. The 33 1⁄3 rpm LP (for "long - play '') format was developed by Columbia Records and marketed in June 1948. The first LP release consisted of 85 12 - inch classical pieces, starting with the Mendelssohn violin concerto performed by violinist Nathan Milstein with the Philharmonic Symphony of New York conducted by Bruno Walter, Columbia ML - 4001. Also released in June 1948 were three series of 10 - inch "LPs '' and a 7 - inch "ZLP ''. RCA Victor developed the 45 rpm format and marketed it in March 1949. The 45s released by RCA in March 1949 were in seven different colors of vinyl depending on the type of music recorded: blues, country, popular, etc. Columbia and RCA Victor each pursued their R&D secretly. Both types of new disc used narrower grooves, intended to be played with a smaller stylus -- typically 0.001 inches ("1 mil '', 25 μm) wide, compared to 0.003 inches (76 μm) for a 78 -- so the new records were sometimes called Microgroove. In the mid-1950s all record companies agreed to a common frequency response standard called RIAA equalization. Prior to the establishment of the standard, each company used its own preferred equalization, requiring discriminating listeners to use pre-amplifiers with selectable equalization curves.
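The stylus widths quoted are straightforward inch - to - micrometre conversions, easy to verify:

```python
def inches_to_micrometres(inches):
    return inches * 25400  # 1 inch = 25,400 micrometres exactly

print(round(inches_to_micrometres(0.001)))  # 25 -> microgroove stylus
print(round(inches_to_micrometres(0.003)))  # 76 -> 78 rpm stylus
```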
Some recordings, such as books for the blind, were pressed for playing at 16 2⁄3 rpm. Prestige Records released jazz records in this format in the late 1950s; for example, two of their Miles Davis albums were paired together in this format. Peter Goldmark, the man who developed the 33 1⁄3 rpm record, developed the Highway Hi - Fi 16 2⁄3 rpm record to be played in Chrysler automobiles, but poor performance of the system and weak implementation by Chrysler and Columbia led to the demise of the 16 2⁄3 rpm records. Subsequently, the 16 2⁄3 rpm speed was used for narrated publications for the blind and visually impaired; such records were never widely commercially available, although it was common to see new turntable models with a 16 rpm speed setting produced as late as the 1970s.
Seeburg Corporation introduced the Seeburg Background Music System in 1959, using a 16 2⁄3 rpm 9 - inch record with a 2 - inch center hole. Each record held 40 minutes of music per side, recorded at 420 grooves per inch.
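Those specifications are self - consistent, which a quick calculation confirms: 40 minutes at 16 2⁄3 rpm is about 667 revolutions, and at 420 grooves per inch that occupies roughly 1.6 inches of recorded band, comfortably available on a 9 - inch disc with a 2 - inch center hole. A sketch of the arithmetic:

```python
def groove_band_inches(minutes_per_side, rpm, grooves_per_inch):
    # one revolution advances the stylus by one groove pitch
    revolutions = minutes_per_side * rpm
    return revolutions / grooves_per_inch

width = groove_band_inches(40, 16 + 2 / 3, 420)
print(round(width, 2))  # 1.59 inches of groove band needed
```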
The commercial rivalry between RCA Victor and Columbia Records led to RCA Victor 's introduction of what it had intended to be a competing vinyl format, the 7 - inch (175 mm) 45 rpm disc. For a two - year period from 1948 to 1950, record companies and consumers faced uncertainty over which of these formats would ultimately prevail in what was known as the "War of the Speeds ''. (See also format war.) In 1949 Capitol and Decca adopted the new LP format and RCA Victor gave in and issued its first LP in January 1950. The 45 rpm size was gaining in popularity, too, and Columbia issued its first 45s in February 1951. By 1954, 200 million 45s had been sold.
Eventually the 12 - inch (300 mm) 33 1⁄3 rpm LP prevailed as the dominant format for musical albums, and 10 - inch LPs were no longer issued. The last Columbia Records reissue of any Frank Sinatra songs on a 10 - inch LP record was an album called Hall of Fame, CL 2600, issued on October 26, 1956, containing six songs, one each by Tony Bennett, Rosemary Clooney, Johnnie Ray, Frank Sinatra, Doris Day, and Frankie Laine. The 10 - inch LP had a longer life in the United Kingdom, where important early British rock and roll albums such as Lonnie Donegan 's Lonnie Donegan Showcase and Billy Fury 's The Sound of Fury were released in that form. The 7 - inch (175 mm) 45 rpm disc or "single '' established a significant niche for shorter duration discs, typically containing one item on each side. The 45 rpm discs typically emulated the playing time of the former 78 rpm discs, while the 12 - inch LP discs eventually provided up to one half - hour of recorded material per side.
The 45 rpm discs also came in a variety known as extended play (EP), which achieved up to 10 -- 15 minutes play at the expense of attenuating (and possibly compressing) the sound to reduce the width required by the groove. EP discs were cheaper to produce, and were used in cases where unit sales were likely to be more limited or to reissue LP albums on the smaller format for those people who had only 45 rpm players. LP albums could be purchased one EP at a time, with four items per EP, or in a boxed set with three EPs or twelve items. The large center hole on 45s allows for easier handling by jukebox mechanisms. EPs were generally discontinued by the late 1950s in the U.S. as three - and four - speed record players replaced the individual 45 players. One indication of the decline of the 45 rpm EP is that the last Columbia Records reissue of Frank Sinatra songs on 45 rpm EP records, called Frank Sinatra (Columbia B - 2641) was issued on December 7, 1959. The EP lasted considerably longer in Europe, and was a popular format during the 1960s for recordings by artists such as Serge Gainsbourg and the Beatles.
In the late 1940s and early 1950s, 45 rpm - only players that lacked speakers and plugged into a jack on the back of a radio were widely available. Eventually, they were replaced by the three - speed record player.
From the mid-1950s through the 1960s, the common U.S. home record player or "stereo '' (after the introduction of stereo recording) would typically have had these features: a three - or four - speed player (78, 45, 33 1⁄3, and sometimes 16 2⁄3 rpm); a changer, a tall spindle that would hold several records and automatically drop a new record on top of the previous one when it had finished playing; a combination cartridge with both 78 and microgroove styli and a way to flip between the two; and some kind of adapter for playing the 45s with their larger center hole. The adapter could be a small solid circle that fit onto the bottom of the spindle (meaning only one 45 could be played at a time) or a larger adapter that fit over the entire spindle, permitting a stack of 45s to be played.
RCA Victor 45s were also adapted to the smaller spindle of an LP player with a plastic snap - in insert known as a "spider ''. These inserts, commissioned by RCA president David Sarnoff and invented by Thomas Hutchison, were prevalent starting in the 1960s, selling in the tens of millions per year during the 45 rpm heyday. In countries outside the U.S., 45s often had the smaller album - sized holes, e.g., Australia and New Zealand, or as in the United Kingdom, especially before the 1970s, the disc had a small hole within a circular central section held only by three or four lands so that it could be easily punched out if desired (typically for use in jukeboxes).
During the vinyl era, various developments were made or introduced. Stereo finally lost its previous experimental status, and eventually became standard internationally. Quadraphonic sound effectively had to wait for digital formats before finding a permanent position in the market place.
The term "high fidelity '' was coined in the 1920s by some manufacturers of radio receivers and phonographs to differentiate their better - sounding products claimed as providing "perfect '' sound reproduction. The term began to be used by some audio engineers and consumers through the 1930s and 1940s. After 1949 a variety of improvements in recording and playback technologies, especially stereo recordings, which became widely available in 1958, gave a boost to the "hi - fi '' classification of products, leading to sales of individual components for the home such as amplifiers, loudspeakers, phonographs, and tape players. High Fidelity and Audio were two magazines that hi - fi consumers and engineers could read for reviews of playback equipment and recordings.
Stereophonic sound recording, which attempts to provide a more natural listening experience by reproducing the spatial locations of sound sources in the horizontal plane, was the natural extension to monophonic recording, and attracted various alternative engineering attempts. The ultimately dominant "45 / 45 '' stereophonic record system was invented by Alan Blumlein of EMI in 1931 and patented the same year. EMI cut the first stereo test discs using the system in 1933 (see Bell Labs Stereo Experiments of 1933) although the system was not exploited commercially until much later.
In this system, each of two stereo channels is carried independently by a separate groove wall, each wall face moving at 45 degrees to the plane of the record surface (hence the system 's name) in correspondence with the signal level of that channel. By convention, the inner wall carries the left - hand channel and the outer wall carries the right - hand channel.
While the stylus only moves horizontally when reproducing a monophonic disk recording, on stereo records the stylus moves vertically as well as horizontally. During playback, the movement of a single stylus tracking the groove is sensed independently, e.g., by two coils, each mounted diagonally opposite the relevant groove wall.
The combined stylus motion can be represented in terms of the vector sum and difference of the two stereo channels. Vertical stylus motion then carries the L − R difference signal and horizontal stylus motion carries the L + R summed signal, the latter representing the monophonic component of the signal in exactly the same manner as a purely monophonic record.
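This sum - and - difference relationship is ordinary mid/side arithmetic, and the round trip is easy to sketch (a simplified model: real groove motion is a continuous waveform rather than single sample values):

```python
def encode(left, right):
    # horizontal motion carries L + R (the mono-compatible sum),
    # vertical motion carries L - R (the stereo difference)
    return left + right, left - right

def decode(horizontal, vertical):
    left = (horizontal + vertical) / 2
    right = (horizontal - vertical) / 2
    return left, right

h, v = encode(0.8, 0.3)
l, r = decode(h, v)
print(round(l, 6), round(r, 6))  # 0.8 0.3 -- channels recovered
```

A mono cartridge, which senses only the horizontal component, automatically picks up the L + R sum; this is why 45/45 stereo records remained mono - compatible.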
The advantages of the 45 / 45 system as compared to alternative systems were:
In 1957 the first commercial stereo two - channel records were issued, first by Audio Fidelity and then, on translucent blue vinyl, by Bel Canto Records, whose first stereo offering was a multi-colored - vinyl sampler featuring A Stereo Tour of Los Angeles narrated by Jack Wagner on one side and a collection of tracks from various Bel Canto albums on the other.
Following in 1958, more stereo LP releases were offered by Audio Fidelity Records in the US and Pye Records in Britain. However, it was not until the mid-to - late 1960s that the sales of stereophonic LPs overtook those of their monophonic equivalents, and became the dominant record type.
The development of quadraphonic records was announced in 1971. These recorded four separate sound signals. This was achieved on the two stereo channels by electronic matrixing, where the additional channels were combined into the main signal. When the records were played, phase - detection circuits in the amplifiers were able to decode the signals into four separate channels. There were two main systems of matrixed quadraphonic records produced, confusingly named SQ (by CBS) and QS (by Sansui). They proved commercially unsuccessful, but were an important precursor to later surround sound systems, as seen in SACD and home cinema today.
A different format, Compatible Discrete 4 (CD - 4) (not to be confused with Compact Disc), was introduced by RCA. This system encoded the front - rear difference information on an ultrasonic carrier and required a compatible cartridge and a carefully calibrated pickup arm / turntable combination to capture it. CD - 4 was less successful than the matrix formats. (A further problem was that no cutting heads were available that could handle the high frequency information. This was remedied by cutting at half the speed. Later, special half - speed cutting heads and equalization techniques were employed to get wider frequency response in stereo with reduced distortion and greater headroom.)
Under the direction of recording engineer C. Robert Fine, Mercury Records initiated a minimalist single microphone monaural recording technique in 1951. The first record, a Chicago Symphony Orchestra performance of Pictures at an Exhibition, conducted by Rafael Kubelik, was described as "being in the living presence of the orchestra '' by The New York Times music critic. The series of records was then named Mercury Living Presence. In 1955, Mercury began three - channel stereo recordings, still based on the principle of the single microphone. The center (single) microphone was of paramount importance, with the two side mics adding depth and space. Record masters were cut directly from a three - track to two - track mixdown console, with all editing of the master tapes done on the original three - tracks. In 1961, Mercury enhanced this technique with three - microphone stereo recordings using 35 mm magnetic film instead of 1⁄2 - inch tape for recording. The greater thickness and width of 35 mm magnetic film prevented tape layer print - through and pre-echo and gained extended frequency range and transient response. The Mercury Living Presence recordings were remastered to CD in the 1990s by the original producer, Wilma Cozart Fine, using the same method of three - to - two mix directly to the master recorder.
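A three - track - to - two - track mixdown of the kind described can be sketched as below; note that the center - channel gain of 0.707 (about -3 dB) is a common pan - law choice assumed here for illustration, not a documented setting from the Mercury console:

```python
def three_to_two(left, center, right, center_gain=0.707):
    # The single center microphone is fed equally to both stereo
    # channels; 0.707 (about -3 dB) is an assumed pan-law gain,
    # chosen so the shared channel's acoustic power stays constant.
    l_out = left + center_gain * center
    r_out = right + center_gain * center
    return l_out, r_out

l, r = three_to_two(0.2, 0.5, 0.1)
print(round(l, 4), round(r, 4))  # 0.5535 0.4535
```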
Through the 1960s, 1970s, and 1980s, various methods to improve the dynamic range of mass - produced records involved highly advanced disc cutting equipment. These techniques, marketed as the CBS DisComputer and Teldec Direct Metal Mastering among others, were used to reduce inner - groove distortion. RCA Victor introduced another system to reduce dynamic range and achieve a groove with less surface noise under the commercial name of Dynagroove. Two main elements were combined: a different disc material with less surface noise in the groove, and dynamic compression for masking background noise. This compression was sometimes called "diaphragming '' the source material, and it was not favoured by some music lovers for its unnatural side effects. Both elements were reflected in the brand name Dynagroove, described elsewhere in more detail. The system also used the earlier advanced method of forward - looking control of groove spacing with respect to volume of sound and position on the disc: lower recorded volume used closer spacing; higher recorded volume used wider spacing, especially with lower frequencies. The higher track density at lower volumes also enabled disc recordings to end farther away from the disc center than usual, helping to reduce end - track distortion even further.
Also in the late 1970s, "direct - to - disc '' records were produced, aimed at an audiophile niche market. These completely bypassed the use of magnetic tape in favor of a "purist '' transcription directly to the master lacquer disc. Also during this period, half - speed mastered and "original master '' records were released, using expensive state - of - the - art technology. A further late 1970s development was the Disco Eye - Cued system used mainly on Motown 12 - inch singles released between 1978 and 1980. The introduction, drum - breaks, or choruses of a track were indicated by widely separated grooves, giving a visual cue to DJs mixing the records. The appearance of these records is similar to an LP, but they only contain one track each side.
The mid-1970s saw the introduction of dbx - encoded records, again for the audiophile niche market. These were completely incompatible with standard record playback preamplifiers, relying on the dbx compandor encoding / decoding scheme to greatly increase dynamic range (dbx encoded disks were recorded with the dynamic range compressed by a factor of two: quiet sounds were meant to be played back at low gain and loud sounds were meant to be played back at high gain, via automatic gain control in the playback equipment; this reduced the effect of surface noise on quiet passages). A similar and very short - lived scheme involved using the CBS - developed "CX '' noise reduction encoding / decoding scheme.
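The 2:1 companding scheme is easy to model in terms of decibel levels: the encoder halves each level 's distance from a reference point and the playback expander doubles it back, which pushes fixed surface noise well below the restored quiet passages. A simplified sketch (real dbx encoding also involves RMS level detection and pre-emphasis, omitted here):

```python
def compress_db(level_db, ref_db=0.0):
    # 2:1 compression: distance from the reference level is halved
    return ref_db + (level_db - ref_db) / 2

def expand_db(level_db, ref_db=0.0):
    # the playback expander restores the original dynamic range
    return ref_db + (level_db - ref_db) * 2

quiet, loud = -60.0, -10.0
print(compress_db(quiet), compress_db(loud))  # -30.0 -5.0 as cut on disc
print(expand_db(compress_db(quiet)))          # -60.0 restored on playback
```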
ELPJ, a Japan - based company, sells a laser turntable that uses a laser to read vinyl discs optically, without physical contact. The laser turntable eliminates record wear and the possibility of accidental scratches, which degrade the sound, but its expense limits use primarily to digital archiving of analog records, and the laser does not play back colored vinyl or picture discs. Various other laser - based turntables were tried during the 1990s, but while a laser reads the groove very accurately, since it does not touch the record, the dust that vinyl attracts due to static electric charge is not mechanically pushed out of the groove, worsening sound quality in casual use compared to conventional stylus playback.
In some ways similar to the laser turntable is the IRENE scanning machine for disc records, which images with microphotography in two dimensions, invented by a team of physicists at Lawrence Berkeley Laboratories. IRENE will retrieve the information from a laterally modulated monaural grooved sound source without touching the medium itself, but can not read vertically modulated information. This excludes grooved recordings such as cylinders and some radio transcriptions that feature a hill - and - dale format of recording, and stereophonic or quadraphonic grooved recordings, which utilize a combination of the two as well as supersonic encoding for quadraphonic.
An offshoot of IRENE, the Confocal Microscope Cylinder Project, can capture a high - resolution three - dimensional image of the surface, down to 200 μm. In order to convert to a digital sound file, this is then played by a version of the same ' virtual stylus ' program developed by the research team in real - time, converted to digital and, if desired, processed through sound - restoration programs.
As recording technology evolved, more specific terms for various types of phonograph records were used in order to describe some aspect of the record: either its correct rotational speed (" 16 2⁄3 rpm '' (revolutions per minute), " 33 1⁄3 rpm '', "45 rpm '', "78 rpm '') or the material used (particularly "vinyl '' to refer to records made of polyvinyl chloride, or the earlier "shellac records '', shellac being the main ingredient in 78s).
Terms such as "long - play '' (LP) and "extended - play '' (EP) describe multi-track records that play much longer than the single - item - per - side records, which typically do not go much past four minutes per side. An LP can play for up to 30 minutes per side, though most played for about 22 minutes per side, bringing the total playing time of a typical LP recording to about forty - five minutes. Many pre-1952 LPs, however, played for about 15 minutes per side. The 7 - inch 45 rpm format normally contains one item per side, but a 7 - inch EP could achieve recording times of 10 to 15 minutes at the expense of attenuating and compressing the sound to reduce the width required by the groove. EP discs were generally used to make available tracks not on singles, including tracks from LP albums in a smaller, less expensive format for those who had only 45 rpm players. The large center hole on 7 - inch 45 rpm records allows for easier handling by jukebox mechanisms. The term "album '', originally used to mean a "book '' with liner notes, holding several 78 rpm records each in its own "page '' or sleeve, no longer has any relation to the physical format: a single LP record, or nowadays more typically a compact disc.
The usual diameters of the holes are 0.286 inches (7.26 mm) with larger holes on singles in the USA being 1.5 inches (38.1 mm).
Sizes of records in the United States and the UK are generally measured in inches, e.g. 7 - inch records, which are typically 45 rpm records. LPs were 10 - inch records at first, but soon the 12 - inch size became by far the most common. Generally, 78s were 10 - inch, but 12 - inch, 7 - inch, and even smaller sizes were made -- the so - called "little wonders ''.
Flexi discs were thin flexible records that were distributed with magazines and as promotional gifts from the 1960s to the 1980s.
In March 1949, as RCA released the 45, Columbia released several hundred 7 - inch 33⅓ rpm small spindle hole singles. This format was soon dropped as it became clear that the RCA 45 was the single of choice and the Columbia 12 - inch LP would be the "album '' of choice. The first release of the 45 came in seven colors: black 47 - xxxx popular series, yellow 47 - xxxx juvenile series, green (teal) 48 - xxxx country series, deep red 49 - xxxx classical series, bright red (cerise) 50 - xxxx blues / spiritual series, light blue 51 - xxxx international series, dark blue 52 - xxxx light classics. All colors were soon dropped in favor of black because of production problems. However, yellow and deep red were continued until about 1952. The first 45 rpm record created for sale was "PeeWee the Piccolo '' RCA 47 - 0147 pressed in yellow translucent vinyl at the Sherman Avenue plant, Indianapolis on December 7, 1948, by R.O. Price, plant manager.
In the 1970s, the government of Bhutan produced now - collectible postage stamps on playable vinyl mini-discs.
The normal commercial disc is engraved with two sound - bearing concentric spiral grooves, one on each side, running from the outside edge towards the center. The last part of the spiral meets an earlier part to form a circle. The sound is encoded by fine variations in the edges of the groove that cause a stylus (needle) placed in it to vibrate at acoustic frequencies when the disc is rotated at the correct speed. Generally, the outer and inner parts of the groove bear no intended sound (exceptions include the Beatles ' Sgt. Pepper 's Lonely Hearts Club Band and Split Enz 's Mental Notes).
Increasingly from the early 20th century, and almost exclusively since the 1920s, both sides of the record have been used to carry the grooves. Occasional records have been issued since then with a recording on only one side. In the 1980s Columbia records briefly issued a series of less expensive one - sided 45 rpm singles.
The majority of non-78 rpm records are pressed on black vinyl. The coloring material used to blacken the transparent PVC plastic mix is carbon black, which increases the strength of the disc and makes it opaque. Polystyrene is often used for 7 - inch records.
Some records are pressed on colored vinyl or with paper pictures embedded in them ("picture discs ''). Certain 45 rpm RCA or RCA Victor Red Seal records used red translucent vinyl for extra "Red Seal '' effect. During the 1980s there was a trend for releasing singles on colored vinyl -- sometimes with large inserts that could be used as posters. This trend has been revived recently with 7 - inch singles.
Since the introduction of the LP in 1948, vinyl record standards for the United States have followed the guidelines of the Recording Industry Association of America (RIAA). The inch dimensions are nominal, not precise diameters. The actual dimension of a 12 - inch record is 302 mm (11.89 in), for a 10 - inch it is 250 mm (9.84 in), and for a 7 - inch it is 175 mm (6.89 in).
Records made in other countries are standardized by different organizations, but are very similar in size. The record diameters are typically nominally 300 mm, 250 mm and 175 mm.
There is an area about 3 mm (0.12 in) wide at the outer edge of the disk, called the lead - in or run - in, where the groove is widely spaced and silent. The stylus is lowered onto the lead - in, without damaging the recorded section of the groove.
Between tracks on the recorded section of an LP record there is usually a short gap of around 1 mm (0.04 in) where the groove is widely spaced. This space is clearly visible, making it easy to find a particular track.
Towards the center, at the end of the groove, there is another wide - pitched section known as the lead - out. At the very end of this section the groove joins itself to form a complete circle, called the lock groove; when the stylus reaches this point, it circles repeatedly until lifted from the record. On some recordings (for example Sgt. Pepper 's Lonely Hearts Club Band by The Beatles, Super Trouper by ABBA and Atom Heart Mother by Pink Floyd), the sound continues on the lock groove, which gives a strange repeating effect. Automatic turntables rely on the position or angular velocity of the arm, as it reaches the wider spacing in the groove, to trigger a mechanism that lifts the arm off the record. Precisely because of this mechanism, most automatic turntables are incapable of playing any audio in the lock groove, since they will lift the arm before it reaches that groove.
The catalog number and stamper ID are written or stamped in the space between the groove in the lead - out on the master disc, resulting in visible recessed writing on the final version of a record. Sometimes the cutting engineer might add handwritten comments or their signature, if they are particularly pleased with the quality of the cut. These are generally referred to as "run - out etchings ''.
When auto - changing turntables were commonplace, records were typically pressed with a raised (or ridged) outer edge and a raised label area, allowing records to be stacked onto each other without the delicate grooves coming into contact, reducing the risk of damage. Auto - changers included a mechanism to support a stack of several records above the turntable itself, dropping them one at a time onto the active turntable to be played in order. Many longer sound recordings, such as complete operas, were interleaved across several 10 - inch or 12 - inch discs for use with auto - changing mechanisms, so that the first disk of a three - disk recording would carry sides 1 and 6 of the program, while the second disk would carry sides 2 and 5, and the third, sides 3 and 4, allowing sides 1, 2, and 3 to be played automatically; then the whole stack reversed to play sides 4, 5, and 6.
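The interleaving scheme described above follows a simple rule: for a set of n discs, disc i carries program sides i and 2n+1-i. A minimal sketch in Python (the function name is illustrative, not an established term):

```python
def auto_coupled_sides(num_discs):
    """For an auto-changer set of num_discs records, return the (front, back)
    program sides carried by each disc, so the stack plays sides 1..n in
    order, then the whole stack is flipped to play sides n+1..2n."""
    total_sides = 2 * num_discs
    return [(disc, total_sides + 1 - disc) for disc in range(1, num_discs + 1)]

# A three-disc set, as in the example above: disc 1 carries sides 1 and 6,
# disc 2 carries sides 2 and 5, and disc 3 carries sides 3 and 4.
print(auto_coupled_sides(3))  # [(1, 6), (2, 5), (3, 4)]
```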
The sound quality and durability of vinyl records is highly dependent on the quality of the vinyl. During the early 1970s, as a cost - cutting move, much of the industry began reducing the thickness and quality of vinyl used in mass - market manufacturing. The technique was marketed by RCA Victor as the Dynaflex (125 g) process, but was considered inferior by most record collectors. Most vinyl records are pressed from a mix of 70 % virgin and 30 % recycled vinyl.
New or "virgin '' heavy / heavyweight (180 -- 220 g) vinyl is commonly used for modern audiophile vinyl releases in all genres. Many collectors prefer to have heavyweight vinyl albums, which have been reported to have better sound than normal vinyl because of their higher tolerance against deformation caused by normal play. 180 g vinyl is more expensive to produce only because it uses more vinyl. Manufacturing processes are identical regardless of weight. In fact, pressing lightweight records requires more care. An exception is the propensity of 200 g pressings to be slightly more prone to non-fill, when the vinyl biscuit does not sufficiently fill a deep groove during pressing (percussion or vocal amplitude changes are the usual locations of these artifacts). This flaw causes a grinding or scratching sound at the non-fill point.
Since most vinyl records contain up to 30 % recycled vinyl, impurities can accumulate in the record and cause even a brand - new record to have audio artifacts such as clicks and pops. Virgin vinyl means that the album is not from recycled plastic, and will theoretically be devoid of these impurities. In practice, this depends on the manufacturer 's quality control.
The "orange peel '' effect on vinyl records is caused by worn molds. Rather than having the proper mirror - like finish, the surface of the record will have a texture that looks like orange peel. This introduces noise into the record, particularly in the lower frequency range. With direct metal mastering (DMM), the master disc is cut on a copper - coated disc, which can also have a minor "orange peel '' effect on the disc itself. As this "orange peel '' originates in the master rather than being introduced in the pressing stage, there is no ill effect as there is no physical distortion of the groove.
Original master discs are created by lathe - cutting: a lathe is used to cut a modulated groove into a blank record. The blank records for cutting used to be cooked up, as needed, by the cutting engineer, using what Robert K. Morrison describes as a "metallic soap '', containing lead litharge, ozokerite, barium sulfate, montan wax, stearin and paraffin, among other ingredients. Cut "wax '' sound discs would be placed in a vacuum chamber and gold - sputtered to make them electrically conductive for use as mandrels in an electroforming bath, where pressing stamper parts were made. Later, the French company Pyral invented a ready - made blank disc having a thin nitro - cellulose lacquer coating (approximately 7 mils thickness on both sides) that was applied to an aluminum substrate. Lacquer cuts result in an immediately playable, or processable, master record. If vinyl pressings are wanted, the still - unplayed sound disc is used as a mandrel for electroforming nickel records that are used for manufacturing pressing stampers. The electroformed nickel records are mechanically separated from their respective mandrels. This is done with relative ease because no actual "plating '' of the mandrel occurs in the type of electrodeposition known as electroforming, unlike with electroplating, in which the adhesion of the new phase of metal is chemical and relatively permanent. The one - molecule - thick coating of silver (that was sprayed onto the processed lacquer sound disc in order to make its surface electrically conductive) reverse - plates onto the nickel record 's face. This negative impression disc (having ridges in place of grooves) is known as a nickel master, "matrix '' or "father ''. The "father '' is then used as a mandrel to electroform a positive disc known as a "mother ''. Many mothers can be grown on a single "father '' before ridges deteriorate beyond effective use. 
The "mothers '' are then used as mandrels for electroforming more negative discs known as "sons ''. Each "mother '' can be used to make many "sons '' before deteriorating. The "sons '' are then converted into "stampers '' by center - punching a spindle hole (which was lost from the lacquer sound disc during initial electroforming of the "father ''), and by custom - forming the target pressing profile. This allows them to be placed in the dies of the target (make and model) record press and, by center - roughing, to facilitate the adhesion of the label, which gets stuck onto the vinyl pressing without any glue. In this way, several million vinyl discs can be produced from a single lacquer sound disc. When only a few hundred discs are required, instead of electroforming a "son '' (for each side), the "father '' is stripped of its silver and converted into a stamper. Production by this latter method, known as the "two - step process '' (as it does not entail creation of "sons '' but does involve creation of "mothers '', which are used for test playing and kept as "safeties '' for electroforming future "sons ''), is limited to a few hundred vinyl pressings. The pressing count can increase if the stamper holds out and the quality of the vinyl is high. The "sons '' made during a "three - step '' electroforming make better stampers, since they don't require silver removal (which reduces some high fidelity because the etching erases part of the smallest groove modulations) and also because they have a stronger metal structure than "fathers ''.
Shellac 78s are fragile, and must be handled carefully. In the event of a 78 breaking, the pieces might remain loosely connected by the label and still be playable if the label holds them together, although there is a loud pop with each pass over the crack, and breaking of the stylus is likely.
Breakage was very common in the shellac era. In the 1934 John O'Hara novel, Appointment in Samarra, the protagonist "broke one of his most favorites, Whiteman 's Lady of the Evening... He wanted to cry but could not. '' A poignant moment in J.D. Salinger 's 1951 novel The Catcher in the Rye occurs after the adolescent protagonist buys a record for his younger sister but drops it and "it broke into pieces... I damn - near cried, it made me feel so terrible. '' A sequence where a school teacher 's collection of 78 rpm jazz records is smashed by a group of rebellious students is a key moment in the film Blackboard Jungle.
Another problem with shellac was that the disks tended to be larger, because shellac was limited to 80 -- 100 groove walls per inch before the risk of groove collapse became too high, whereas vinyl could have up to 260 groove walls per inch.
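The link between groove density and playing time can be sketched with simple arithmetic: the number of groove turns that fit in the recorded band, divided by the turns consumed per minute (the rpm), gives the per-side duration. The sketch below treats the quoted walls-per-inch figures as groove turns per radial inch (a simplification), and the band widths are illustrative assumptions, not figures from the text:

```python
def playing_time_min(outer_radius_in, inner_radius_in, turns_per_inch, rpm):
    """Approximate per-side playing time in minutes: groove turns in the
    recorded band divided by turns played per minute (the rpm)."""
    band_width = outer_radius_in - inner_radius_in  # radial width of the band
    return band_width * turns_per_inch / rpm

# Hypothetical figures: a 10-inch 78 with ~3 inches of recorded band
# at 100 turns/inch, versus a 12-inch LP at 260 turns/inch.
print(round(playing_time_min(4.75, 1.75, 100, 78), 1))     # roughly 3.8 min
print(round(playing_time_min(5.75, 2.35, 260, 100 / 3), 1))  # roughly 26.5 min
```

With these assumed numbers the coarse-groove 78 yields the familiar three-to-four-minute side, while the denser vinyl groove yields an LP-length side.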
By the time World War II began, major labels were experimenting with laminated records. As stated above, and in several record advertisements of the period, the materials that make for a quiet surface (shellac) are notoriously weak and fragile. Conversely the materials that make for a strong disc (cardboard and other fiber products) are not those known for allowing a quiet noise - free surface.
Vinyl records do not break easily, but the soft material is easily scratched. Vinyl readily acquires a static charge, attracting dust that is difficult to remove completely. Dust and scratches cause audio clicks and pops. In extreme cases, they can cause the needle to skip over a series of grooves, or worse yet, cause the needle to skip backwards, creating a "locked groove '' that repeats over and over. This is the origin of the phrase "like a broken record '' or "like a scratched record '', which is often used to describe a person or thing that continually repeats itself. Locked grooves are not uncommon and were even heard occasionally in radio broadcasts.
Vinyl records can be warped by heat, improper storage, exposure to sunlight, or manufacturing defects such as excessively tight plastic shrinkwrap on the album cover. A small degree of warp was common, and allowing for it was part of the art of turntable and tonearm design. "Wow '' (once - per - revolution pitch variation) could result from warp, or from a spindle hole that was not precisely centered. Standard practice for LPs was to place the LP in a paper or plastic inner cover. This, if placed within the outer cardboard cover so that the opening was entirely within the outer cover, was said to reduce ingress of dust onto the record surface. Singles, with rare exceptions, had simple paper covers with no inner cover.
A further limitation of the gramophone record is that fidelity steadily declines as playback progresses; there is more vinyl per second available for fine reproduction of high frequencies at the large - diameter beginning of the groove than at the smaller diameters close to the end of the side. At the start of a groove on an LP there are 510 mm of vinyl per second traveling past the stylus, while the ending of the groove gives 200 -- 210 mm of vinyl per second -- less than half the linear resolution. Distortion towards the end of the side is likely to become more apparent as record wear increases.
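The figures quoted above follow from ordinary circular-motion arithmetic: linear speed is circumference times revolutions per second. The groove radii below are assumed typical values for a 12 - inch LP, chosen to match the quoted speeds:

```python
import math

def groove_speed_mm_s(radius_mm, rpm):
    """Linear speed of the groove past the stylus at a given radius:
    circumference (2*pi*r) times revolutions per second (rpm/60)."""
    return 2 * math.pi * radius_mm * rpm / 60

RPM = 100 / 3  # 33 1/3 rpm
# Assumed radii: ~146 mm at the outer edge of the recorded band,
# ~58 mm near the end of the side.
print(round(groove_speed_mm_s(146, RPM)))  # about 510 mm/s at the start
print(round(groove_speed_mm_s(58, RPM)))   # about 202 mm/s near the end
```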
Another problem arises because of the geometry of the tonearm. Master recordings are cut on a recording lathe where a sapphire stylus moves radially across the blank, suspended on a straight track and driven by a lead screw. Most turntables use a pivoting tonearm, introducing side forces and pitch and azimuth errors, and thus distortion in the playback signal. Various mechanisms were devised in attempts to compensate, with varying degrees of success. See more at phonograph.
There is controversy about the relative quality of CD sound and LP sound when the latter is heard under the very best conditions (see Analog vs. Digital sound argument). One technical advantage of vinyl over the optical CD, however, is that if correctly handled and stored, a vinyl record will remain playable for centuries, longer than some versions of the optical CD.
In 1925, electric recording extended the recorded frequency range from that of acoustic recording (168 -- 2,000 Hz) by 2½ octaves to 100 -- 5,000 Hz. Even so, these early electronically recorded records used the exponential - horn phonograph (see Orthophonic Victrola) for reproduction.
CD - 4 LPs contain two sub-carriers, one in the left groove wall and one in the right groove wall. These sub-carriers use special FM - PM - SSBFM (Frequency Modulation - Phase Modulation - Single Sideband Frequency Modulation) and have signal frequencies that extend to 45 kHz. CD - 4 sub-carriers could be played with any type stylus as long as the pickup cartridge had CD - 4 frequency response. The recommended stylus for CD - 4 as well as regular stereo records was a line contact or Shibata type.
Gramophone sound includes rumble, which is low - frequency (below about 30 Hz) mechanical noise generated by the motor bearings and picked up by the stylus. Equipment of modest quality is relatively unaffected by these issues, as the amplifier and speaker will not reproduce such low frequencies, but high - fidelity turntable assemblies need careful design to minimize audible rumble.
Room vibrations will also be picked up if the connections from the pedestal to / from turntable to the pickup arm are not well isolated.
Tonearm skating forces and other perturbations are also picked up by the stylus. This is a form of frequency multiplexing, as the control signal (restoring force) used to keep the stylus in the groove is carried by the same mechanism as the sound itself. Subsonic frequencies below about 20 Hz in the audio signal are dominated by tracking effects, which are one form of unwanted rumble ("tracking noise '') and merge with audible frequencies in the deep bass range up to about 100 Hz. High fidelity sound equipment can reproduce tracking noise and rumble. During a quiet passage, woofer speaker cones can sometimes be seen to vibrate with the subsonic tracking of the stylus, at frequencies as low as just above 0.5 Hz (the frequency at which a 33⅓ rpm record turns on the turntable; 5⁄9 Hz exactly on an ideal turntable). Another source of very low frequency material can be a warped disk: its undulations produce frequencies of only a few hertz, and present day amplifiers have large power bandwidths. For this reason, many stereo receivers contained a switchable subsonic filter. Some subsonic content is directly out of phase in each channel. If played back on a mono subwoofer system, the noise will cancel, significantly reducing the amount of rumble that is reproduced.
High frequency hiss is generated as the stylus rubs against the vinyl, and dirt and dust on the vinyl produces popping and ticking sounds. The latter can be reduced somewhat by cleaning the record prior to playback.
Due to recording mastering and manufacturing limitations, both high and low frequencies were removed from the first recorded signals by various formulae. With low frequencies, the stylus must swing a long way from side to side, requiring the groove to be wide, taking up more space and limiting the playing time of the record. At high frequencies, hiss, pops, and ticks are significant. These problems can be reduced by using equalization to an agreed standard. During recording the amplitude of low frequencies is reduced, thus reducing the groove width required, and the amplitude at high frequencies is increased. The playback equipment boosts bass and cuts treble so as to restore the tonal balance in the original signal; this also reduces the high frequency noise. Thus more music will fit on the record, and noise is reduced.
The current standard is called RIAA equalization. It was agreed upon in 1952 and implemented in the United States in 1955; it was not widely used in other countries until the 1970s. Prior to that, especially from 1940, some 100 different formulae were used by the record manufacturers.
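The RIAA curve can be computed from its published time constants (3180 µs, 318 µs and 75 µs). The sketch below evaluates the recording pre-emphasis magnitude, normalized to 0 dB at 1 kHz; the playback de-emphasis is the exact inverse:

```python
import math

# RIAA time constants in seconds: 3180 us, 318 us, 75 us.
T1, T2, T3 = 3180e-6, 318e-6, 75e-6

def riaa_record_gain_db(freq_hz):
    """Recording pre-emphasis gain in dB, relative to 0 dB at 1 kHz.
    Bass amplitude is reduced and treble amplitude is increased, as
    described above; playback applies the inverse curve."""
    def magnitude(f):
        w = 2 * math.pi * f
        return (abs(1 + 1j * w * T1) * abs(1 + 1j * w * T3)
                / abs(1 + 1j * w * T2))
    return 20 * math.log10(magnitude(freq_hz) / magnitude(1000.0))

print(round(riaa_record_gain_db(20000), 1))  # about +19.6 dB treble boost
print(round(riaa_record_gain_db(20), 1))     # about -19.3 dB bass cut
```

The roughly 40 dB spread between the extremes of the audio band is what narrows the groove in the bass and lifts the treble above the surface noise.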
In 1926 Joseph P. Maxwell and Henry C. Harrison from Bell Telephone Laboratories disclosed that the recording pattern of the Western Electric "rubber line '' magnetic disc cutter had a constant velocity characteristic. This meant that as frequency increased in the treble, recording amplitude decreased. Conversely, in the bass as frequency decreased, recording amplitude increased. Therefore, it was necessary to attenuate the bass frequencies below about 250 Hz, the bass turnover point, in the amplified microphone signal fed to the recording head. Otherwise, bass modulation became excessive and overcutting took place into the next record groove. When played back electrically with a magnetic pickup having a smooth response in the bass region, a complementary boost in amplitude at the bass turnover point was necessary. G.H. Miller in 1934 reported that when complementary boost at the turnover point was used in radio broadcasts of records, the reproduction was more realistic and many of the musical instruments stood out in their true form.
West in 1930 and later P.G.A.H. Voigt (1940) showed that the early Wente - style condenser microphones contributed to a 4 to 6 dB midrange brilliance or pre-emphasis in the recording chain. This meant that the electrical recording characteristics of Western Electric licensees such as Columbia Records and Victor Talking Machine Company in the 1925 era had a higher amplitude in the midrange region. Brilliance such as this compensated for dullness in many early magnetic pickups having drooping midrange and treble response. As a result, this practice was the empirical beginning of using pre-emphasis above 1,000 Hz in 78 rpm and 33⅓ rpm records.
Over the years a variety of record equalization practices emerged and there was no industry standard. For example, in Europe recordings for years required playback with a bass turnover setting of 250 -- 300 Hz and a treble roll - off at 10,000 Hz ranging from 0 to − 5 dB or more. In the US there were more varied practices and a tendency to use higher bass turnover frequencies such as 500 Hz as well as a greater treble rolloff like − 8.5 dB and even more to record generally higher modulation levels on the record.
Evidence from the early technical literature concerning electrical recording suggests that it wasn't until the 1942 -- 1949 period that there were serious efforts to standardize recording characteristics within an industry. Heretofore, electrical recording technology from company to company was considered a proprietary art all the way back to the 1925 Western Electric licensed method used by Columbia and Victor. For example, what Brunswick - Balke - Collender (Brunswick Corporation) did was different from the practices of Victor.
Broadcasters were faced with having to adapt daily to the varied recording characteristics of many sources: various makers of "home recordings '' readily available to the public, European recordings, lateral - cut transcriptions, and vertical - cut transcriptions. Efforts were started in 1942 to standardize within the National Association of Broadcasters (NAB), later known as the National Association of Radio and Television Broadcasters (NARTB). The NAB, among other items, issued recording standards in 1949 for laterally and vertically cut records, principally transcriptions. A number of 78 rpm record producers as well as early LP makers also cut their records to the NAB / NARTB lateral standard.
The lateral cut NAB curve was remarkably similar to the NBC Orthacoustic curve that evolved from practices within the National Broadcasting Company since the mid-1930s. Empirically, and not by any formula, it was learned that the bass end of the audio spectrum below 100 Hz could be boosted somewhat to override system hum and turntable rumble noises. Likewise at the treble end beginning at 1,000 Hz, if audio frequencies were boosted by 16 dB at 10,000 Hz the delicate sibilant sounds of speech and high overtones of musical instruments could survive the noise level of cellulose acetate, lacquer -- aluminum, and vinyl disc media. When the record was played back using a complementary inverse curve, signal - to - noise ratio was improved and the programming sounded more lifelike.
When the Columbia LP was released in June 1948, the developers subsequently published technical information about the 33⅓ rpm microgroove long playing record. Columbia disclosed a recording characteristic showing that it was like the NAB curve in the treble, but had more bass boost or pre-emphasis below 200 Hz. The authors disclosed electrical network characteristics for the Columbia LP curve. This was the first such curve based on formulae.
In 1951, at the beginning of the post-World War II high fidelity (hi - fi) popularity, the Audio Engineering Society (AES) developed a standard playback curve. This was intended for use by hi - fi amplifier manufacturers. If records were engineered to sound good on hi - fi amplifiers using the AES curve, this would be a worthy goal towards standardization. This curve was defined by the time constants of audio filters and had a bass turnover of 400 Hz and a 10,000 Hz rolloff of − 12 dB.
RCA Victor and Columbia were in a market war concerning which recorded format was going to win: the Columbia LP versus the RCA Victor 45 rpm disc (released in February 1949). Besides also being a battle of disc size and record speed, there was a technical difference in the recording characteristics. RCA Victor was using "new orthophonic '', whereas Columbia was using the LP curve.
Ultimately, the New Orthophonic curve was disclosed in a publication by R.C. Moyer of RCA Victor in 1953. He traced RCA Victor characteristics back to the Western Electric "rubber line '' recorder in 1925 up to the early 1950s laying claim to long - held recording practices and reasons for major changes in the intervening years. The RCA Victor New Orthophonic curve was within the tolerances for the NAB / NARTB, Columbia LP, and AES curves. It eventually became the technical predecessor to the RIAA curve.
As the RIAA curve was essentially an American standard, it had little impact outside the USA until the late 1970s when European recording labels began to adopt the RIAA equalization. It was even later when some Asian recording labels adopted the RIAA standard. In 1989, many Eastern European recording labels and Russian recording labels such as Melodiya were still using their own CCIR equalization. Hence the RIAA curve did not truly become a global standard until the late 1980s.
Further, even after officially agreeing to implement the RIAA equalization curve, many recording labels continued to use their own proprietary equalization even well into the 1970s. Columbia is one such prominent example in the USA, as are Decca, Teldec and Deutsche Grammophon in Europe.
Overall sound fidelity of records produced acoustically using horns instead of microphones had a distant, hollow tone quality. Some voices and instruments recorded better than others; Enrico Caruso, a famous tenor, was one popular recording artist of the acoustic era whose voice was well matched to the recording horn. It has been asked, "Did Caruso make the phonograph, or did the phonograph make Caruso? ''
Delicate sounds and fine overtones were mostly lost, because it took a lot of sound energy to vibrate the recording horn diaphragm and cutting mechanism. There were acoustic limitations due to mechanical resonances in both the recording and playback system. Some pictures of acoustic recording sessions show horns wrapped with tape to help mute these resonances. Even an acoustic recording played back electrically on modern equipment sounds like it was recorded through a horn, notwithstanding a reduction in distortion because of the modern playback. Toward the end of the acoustic era, there were many fine examples of recordings made with horns.
Electric recording which developed during the time that early radio was becoming popular (1925) benefited from the microphones and amplifiers used in radio studios. The early electric recordings were reminiscent tonally of acoustic recordings, except there was more recorded bass and treble as well as delicate sounds and overtones cut on the records. This was in spite of some carbon microphones used, which had resonances that colored the recorded tone. The double button carbon microphone with stretched diaphragm was a marked improvement. Alternatively, the Wente style condenser microphone used with the Western Electric licensed recording method had a brilliant midrange and was prone to overloading from sibilants in speech, but generally it gave more accurate reproduction than carbon microphones.
It was not unusual for electric recordings to be played back on acoustic phonographs. The Victor Orthophonic phonograph was a prime example where such playback was expected. In the Orthophonic, which benefited from telephone research, the mechanical pickup head was redesigned with lower resonance than the traditional mica type. Also, a folded horn with an exponential taper was constructed inside the cabinet to provide better impedance matching to the air. As a result, playback of an Orthophonic record sounded like it was coming from a radio.
Eventually, when it was more common for electric recordings to be played back electrically in the 1930s and 1940s, the overall tone was much like listening to a radio of the era. Magnetic pickups became more common and were better designed as time went on, making it possible to improve the damping of spurious resonances. Crystal pickups were also introduced as lower cost alternatives. The dynamic or moving coil microphone was introduced around 1930 and the velocity or ribbon microphone in 1932. Both of these high quality microphones became widespread in motion picture, radio, recording, and public address applications.
Over time, fidelity, dynamic and noise levels improved to the point that it was harder to tell the difference between a live performance in the studio and the recorded version. This was especially true after the invention of the variable reluctance magnetic pickup cartridge by General Electric in the 1940s when high quality cuts were played on well - designed audio systems. The Capehart radio / phonographs of the era with large diameter electrodynamic loudspeakers, though not ideal, demonstrated this quite well with "home recordings '' readily available in the music stores for the public to buy.
There were important quality advances in recordings specifically made for radio broadcast. In the early 1930s Bell Telephone Laboratories and Western Electric announced the total reinvention of disc recording: the Western Electric Wide Range System, "The New Voice of Action ''. The intent of the new Western Electric system was to improve the overall quality of disc recording and playback. The recording speed was 33⅓ rpm, originally used in the Western Electric / ERPI movie audio disc system implemented in the early Warner Brothers ' Vitaphone "talkies '' of 1927.
The newly invented Western Electric moving coil or dynamic microphone was part of the Wide Range System. It had a flatter audio response than the old style Wente condenser type and didn't require electronics installed in the microphone housing. Signals fed to the cutting head were pre-emphasized in the treble region to help override noise in playback. Groove cuts in the vertical plane were employed rather than the usual lateral cuts. The chief advantage claimed was that more grooves per inch could be crowded together, resulting in longer playback time. Additionally, the problem of inner groove distortion, which plagued lateral cuts, could be avoided with the vertical cut system. Wax masters were made by flowing heated wax over a hot metal disc, thus avoiding the microscopic irregularities of cast blocks of wax and the necessity of planing and polishing.
Vinyl pressings were made with stampers from master cuts that were electroplated in vacuo by means of gold sputtering. Audio response was claimed out to 8,000 Hz, later 13,000 Hz, using lightweight pickups employing jeweled styli. Both amplifiers and cutters used negative feedback, improving the range of frequencies cut and lowering distortion levels. Radio transcription producers such as World Broadcasting System and Associated Music Publishers (AMP) were the dominant licensees of the Western Electric wide range system and towards the end of the 1930s were responsible for two-thirds of the total radio transcription business. These recordings use a bass turnover of 300 Hz and a 10,000 Hz rolloff of −8.5 dB.
Developmentally, much of the technology of the long playing record, successfully released by Columbia in 1948, came from wide range radio transcription practices. The use of vinyl pressings, increased length of programming, and general improvement in audio quality over 78 rpm records were the major selling points.
The complete technical disclosure of the Columbia LP by Peter C. Goldmark, René Snepvangers and William S. Bachman in 1949 made it possible for a great variety of record companies to get into the business of making long playing records. The business grew quickly and interest spread in high fidelity sound and the do-it-yourself market for pickups, turntables, amplifier kits, loudspeaker enclosure plans, and AM/FM radio tuners. The LP record for longer works, the 45 rpm single for pop music, and FM radio became the high fidelity program sources in demand. Radio listeners heard recordings broadcast and this in turn generated more record sales. The industry flourished.
Technology used in making recordings also developed and prospered. There were ten major evolutionary steps that improved LP production and quality during a period of approximately forty years.
At the time of the introduction of the compact disc (CD) in 1982, the stereo LP pressed in vinyl was at the high point of its development. Still, it continued to suffer from a variety of limitations:
Audiophiles have differed over the relative merits of the LP versus the CD since the digital disc was introduced. Vinyl records are still prized by some for their reproduction of analog recordings, despite digital being more accurate in reproducing an analog or digital recording. The LP's drawbacks, however, include surface noise, lower resolution due to a lower signal-to-noise ratio and dynamic range, stereo crosstalk, tracking error, pitch variations and greater sensitivity to handling. Modern anti-aliasing filters and oversampling systems used in digital recordings have eliminated the problems perceived with very early CD players.
There is a theory that vinyl records can audibly represent higher frequencies than compact discs, though most of this content is noise and not relevant to human hearing. According to Red Book specifications, the compact disc has a frequency response of 20 Hz up to 22,050 Hz, and most CD players measure flat within a fraction of a decibel from at least 0 Hz to 20 kHz at full output. Due to the distance required between grooves, it is not possible for an LP to reproduce frequencies as low as a CD can. Additionally, turntable rumble and acoustic feedback obscure the low-end limit of vinyl, but the upper end can be, with some cartridges, reasonably flat within a few decibels to 30 kHz, with gentle roll-off. The carrier signals of the Quad LPs popular in the 1970s were at 30 kHz, out of the range of human hearing. The average human auditory system is sensitive to frequencies from 20 Hz to a maximum of around 20,000 Hz. The upper and lower frequency limits of human hearing vary per person. High frequency sensitivity decreases as a person ages, a process called presbycusis. By contrast, hearing damage from loud noise exposure typically makes it more difficult to hear lower frequencies, such as 3 kHz through 6 kHz.
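The Red Book ceiling quoted above is not arbitrary: by the sampling theorem, a digital system's highest representable frequency is half its sample rate, and CD audio is sampled at 44,100 Hz. A minimal sketch (the function name is illustrative, not from any standard library):

```python
def nyquist_limit(sample_rate_hz: float) -> float:
    """Highest frequency representable at a given sample rate
    (half the sample rate, per the Nyquist-Shannon theorem)."""
    return sample_rate_hz / 2.0

# CD audio is sampled at 44,100 Hz:
print(nyquist_limit(44_100))  # 22050.0 -- the Red Book upper limit
```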
For the first several decades of disc record manufacturing, sound was recorded directly on to the "master disc" at the recording studio. From about 1950 on (earlier for some large record companies, later for some small ones) it became usual to have the performance first recorded on audio tape, which could then be processed or edited, and then dubbed on to the master disc. A record cutter would engrave the grooves into the master disc. Early versions of these master discs were soft wax, and later a harder lacquer was used. The mastering process was originally something of an art, as the operator had to manually allow for the changes in sound which affected how wide the space for the groove needed to be on each rotation.
As the playing of gramophone records causes gradual degradation of the recording, they are best preserved by transferring them onto other media and playing the records as rarely as possible. They need to be stored on edge, and do best under environmental conditions that most humans would find comfortable. The equipment for playback of certain formats (e.g. 16 2⁄3 and 78 rpm) is manufactured only in small quantities, leading to increased difficulty in finding equipment to play the recordings.
Where old disc recordings are considered to be of artistic or historic interest, from before the era of tape or where no tape master exists, archivists play back the disc on suitable equipment and record the result, typically onto a digital format, which can be copied and manipulated to remove analog flaws without any further damage to the source recording. For example, Nimbus Records uses a specially built horn record player to transfer 78s. Anyone can do this using a standard record player with a suitable pickup, a phono preamp (pre-amplifier) and a typical personal computer. However, for accurate transfer, professional archivists carefully choose the correct stylus shape and diameter, tracking weight, equalisation curve and other playback parameters and use high-quality analogue-to-digital converters.
As an alternative to playback with a stylus, a recording can be read optically, processed with software that calculates the velocity that the stylus would be moving in the mapped grooves and converted to a digital recording format. This does no further damage to the disc and generally produces a better sound than normal playback. This technique also has the potential to allow for reconstruction of broken or otherwise damaged discs.
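The velocity calculation described above can be sketched numerically: given groove displacement mapped from optical scans, the signal the stylus would have produced is proportional to the time derivative of that displacement. The sample rate and test signal below are illustrative assumptions, not details of any particular system:

```python
import math

def groove_velocity(displacement, sample_rate_hz):
    """Central-difference estimate of the virtual stylus velocity
    (units of displacement per second) from optically mapped
    groove-displacement samples."""
    dt = 1.0 / sample_rate_hz
    return [
        (displacement[i + 1] - displacement[i - 1]) / (2.0 * dt)
        for i in range(1, len(displacement) - 1)
    ]

# Illustrative check: a 1 kHz sine "groove" sampled at 48 kHz should
# yield a velocity whose peak amplitude is close to 2*pi*1000.
tone = [math.sin(2 * math.pi * 1000 * n / 48_000) for n in range(480)]
velocity = groove_velocity(tone, 48_000)
```

Real systems refine this with interpolation and noise filtering; the central difference is only the simplest workable estimator.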
Groove recordings, first designed in the final quarter of the 19th century, held a predominant position for nearly a century, withstanding competition from reel-to-reel tape, the 8-track cartridge, and the compact cassette. In 1988, the compact disc surpassed the gramophone record in unit sales. Vinyl records experienced a sudden decline in popularity between 1988 and 1991, when the major label distributors restricted their return policies, which retailers had been relying on to maintain and swap out stocks of relatively unpopular titles. First the distributors began charging retailers more for new product if they returned unsold vinyl, and then they stopped providing any credit at all for returns. Retailers, fearing they would be stuck with anything they ordered, only ordered proven, popular titles that they knew would sell, and devoted more shelf space to CDs and cassettes. Record companies also deleted many vinyl titles from production and distribution, further undermining the availability of the format and leading to the closure of pressing plants. This rapid decline in the availability of records accelerated the format's decline in popularity, and is seen by some as a deliberate ploy to make consumers switch to CDs, which, unlike today, were then more profitable for the record companies.
In spite of their flaws, such as the lack of portability, records still have enthusiastic supporters. Vinyl records continue to be manufactured and sold today, especially by independent rock bands and labels, although record sales are considered to be a niche market composed of audiophiles, collectors, and DJs. Old records and out-of-print recordings in particular are in much demand by collectors the world over. (See Record collecting.) Many popular new albums are given releases on vinyl records and older albums are also given reissues, sometimes on audiophile-grade vinyl.
In the United Kingdom, the popularity of indie rock caused sales of new vinyl records (particularly 7 inch singles) to increase significantly in 2006, briefly reversing the downward trend seen during the 1990s.
In the United States, annual vinyl sales increased by 85.8% between 2006 and 2007, albeit off a low base, and by 89% between 2007 and 2008. However, sales increases have moderated over recent years, falling to less than 10% during 2017.
Many electronic dance music and hip hop releases today are still preferred on vinyl; however, digital copies are still widely available. This is because for disc jockeys ("DJs"), vinyl has an advantage over the CD: direct manipulation of the medium. DJ techniques such as slip-cueing, beatmatching, and scratching originated on turntables. With CDs or compact audio cassettes one normally has only indirect manipulation options, e.g., the play, stop, and pause buttons. With a record one can place the stylus a few grooves farther in or out, accelerate or decelerate the turntable, or even reverse its direction, provided the stylus, record player, and record itself are built to withstand it. However, many CDJ and DJ advances, such as DJ software and time-encoded vinyl, now have these capabilities and more.
Figures released in the United States in early 2009 showed that sales of vinyl albums nearly doubled in 2008, with 1.88 million sold -- up from just under 1 million in 2007. In 2009, 3.5 million units sold in the United States, including 3.2 million albums, the highest number since 1998.
Sales have continued to rise into the 2010s, with around 2.8 million sold in 2010, which is the most sales since record keeping began in 1991, when vinyl had been overshadowed by Compact Cassettes and compact discs.
In 2014 artist Jack White sold 40,000 copies of his second solo release, Lazaretto, on vinyl. This beat the record for the largest one-week vinyl sales since 1991, previously held by Pearl Jam's Vitalogy, which sold 34,000 copies in one week in 1994. In 2014, vinyl was the only physical music medium whose sales increased relative to the previous year. Sales of other media, including individual digital tracks, digital albums and compact discs, all fell, with compact discs showing the greatest rate of decline.
In 2011, the Entertainment Retailers Association in the United Kingdom found that consumers were willing to pay on average £16.30 (€19.37, US$25.81) for a single vinyl record, as opposed to £7.82 (€9.30, US$12.38) for a CD and £6.80 (€8.09, US$10.76) for a digital download. In the United States, new vinyl releases often have a larger profit margin (per individual item) than do releases on CD or digital downloads, as the latter formats quickly go down in price.
In 2015 the sales of vinyl records went up 32 %, to $416 million, their highest level since 1988. There were 31.5 million vinyl records sold in 2015, and the number has increased annually ever since 2006.
|
what is the function of a vacuum breaker | Vacuum breaker - wikipedia
A vacuum breaker is an attachment commonly placed on a bibcock valve or toilet or urinal flush valve, that prevents water from being siphoned backward into the public drinking water system. This prevents contamination should the public drinking water system 's pressure drop.
A vacuum breaker typically contains a plastic disc that is pressed forward by water supply pressure and covers small vent holes. Should the supply pressure drop, the disc springs back, opening the vent holes (which let in outside air) and preventing backflow of water.
A more complex valve that accomplishes much the same purpose is the backflow preventer.
Vacuum relief valves are sometimes known as vacuum breakers.
|
when did the deepwater horizon oil spill happen | Deepwater Horizon oil spill - wikipedia
The Deepwater Horizon oil spill (also referred to as the BP oil spill, the BP oil disaster, the Gulf of Mexico oil spill, and the Macondo blowout) is an industrial disaster that began on 20 April 2010, in the Gulf of Mexico on the BP-operated Macondo Prospect. It is considered the largest marine oil spill in the history of the petroleum industry and estimated to be 8% to 31% larger in volume than the previous largest, the Ixtoc I oil spill. The U.S. government estimated the total discharge at 4.9 million barrels (210 million US gal; 780,000 m³). After several failed efforts to contain the flow, the well was declared sealed on 19 September 2010. Reports in early 2012 indicated that the well site was still leaking.
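The parenthetical equivalences in the government estimate can be reproduced from standard conversion factors (1 oil barrel = 42 US gallons ≈ 0.158987 m³); a quick arithmetic check:

```python
BARRELS = 4.9e6          # U.S. government total-discharge estimate
US_GAL_PER_BBL = 42      # exact, by definition of the oil barrel
M3_PER_BBL = 0.158987    # cubic metres per barrel

gallons = BARRELS * US_GAL_PER_BBL
cubic_metres = BARRELS * M3_PER_BBL
print(f"{gallons / 1e6:.0f} million US gal, {cubic_metres:,.0f} m^3")
# ~206 million US gal and ~779,000 m^3, matching the rounded
# figures (210 million US gal; 780,000 m^3) quoted in the text.
```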
A massive response ensued to protect beaches, wetlands and estuaries from the spreading oil, utilizing skimmer ships, floating booms, controlled burns and 1.84 million US gallons (7,000 m³) of oil dispersant. Due to the months-long spill, along with adverse effects from the response and cleanup activities, extensive damage to marine and wildlife habitats and to the fishing and tourism industries was reported. In Louisiana, 4,900,000 pounds (2,200 t) of oily material was removed from the beaches in 2013, over double the amount collected in 2012. Oil cleanup crews worked four days a week on 55 miles (89 km) of Louisiana shoreline throughout 2013. Oil continued to be found as far from the Macondo site as the waters off the Florida Panhandle and Tampa Bay, where scientists said the oil and dispersant mixture is embedded in the sand. In April 2013, it was reported that dolphins and other marine life continued to die in record numbers, with infant dolphins dying at six times the normal rate. One study released in 2014 reported that tuna and amberjack that were exposed to oil from the spill developed deformities of the heart and other organs that would be expected to be fatal or at least life-shortening, and another study found that cardiotoxicity might have been widespread in animal life exposed to the spill.
Numerous investigations explored the causes of the explosion and record-setting spill. The U.S. government's September 2011 report pointed to defective cement on the well, faulting mostly BP, but also rig operator Transocean and contractor Halliburton. Earlier in 2011, a White House commission likewise blamed BP and its partners for a series of cost-cutting decisions and an inadequate safety system, but also concluded that the spill resulted from "systemic" root causes and, "absent significant reform in both industry practices and government policies, might well recur".
In November 2012, BP and the United States Department of Justice settled federal criminal charges with BP pleading guilty to 11 counts of manslaughter, two misdemeanors, and a felony count of lying to Congress. BP also agreed to four years of government monitoring of its safety practices and ethics, and the Environmental Protection Agency announced that BP would be temporarily banned from new contracts with the US government. BP and the Department of Justice agreed to a record - setting $4.525 billion in fines and other payments. As of February 2013, criminal and civil settlements and payments to a trust fund had cost the company $42.2 billion.
In September 2014, a U.S. District Court judge ruled that BP was primarily responsible for the oil spill because of its gross negligence and reckless conduct. In July 2015, BP agreed to pay $18.7 billion in fines, the largest corporate settlement in U.S. history.
The Deepwater Horizon was a 10-year-old semi-submersible, mobile, floating, dynamically positioned drilling rig that could operate in waters up to 10,000 feet (3,000 m) deep. Built by South Korean company Hyundai Heavy Industries and owned by Transocean, the rig operated under the Marshallese flag of convenience and was chartered to BP from March 2008 to September 2013. It was drilling a deep exploratory well, 18,360 feet (5,600 m) below sea level, in approximately 5,100 feet (1,600 m) of water. The well is situated in the Macondo Prospect in Mississippi Canyon Block 252 (MC252) of the Gulf of Mexico, in the United States' exclusive economic zone, roughly 41 miles (66 km) off the Louisiana coast. BP was the operator and principal developer of the Macondo Prospect with a 65% share, while 25% was owned by Anadarko Petroleum Corporation and 10% by MOEX Offshore 2007, a unit of Mitsui.
At approximately 9:45 pm CDT on 20 April 2010, high-pressure methane gas from the well expanded into the drilling riser and rose into the drilling rig, where it ignited and exploded, engulfing the platform. At the time, 126 crew members were on board: seven BP employees, 79 Transocean employees, and employees of various other companies. Eleven missing workers were never found despite a three-day U.S. Coast Guard (USCG) search operation and are believed to have died in the explosion. Ninety-four crew members were rescued by lifeboat or helicopter, 17 of whom were treated for injuries. The Deepwater Horizon sank on the morning of 22 April 2010.
The oil leak was discovered on the afternoon of 22 April 2010, when a large oil slick began to spread at the former rig site. The oil flowed for 87 days. BP originally estimated a flow rate of 1,000 to 5,000 barrels per day (160 to 790 m³/d). The Flow Rate Technical Group (FRTG) estimated the initial flow rate was 62,000 barrels per day (9,900 m³/d). The total estimated volume of leaked oil approximated 4.9 million barrels (210 million US gal; 780,000 m³) with plus or minus 10% uncertainty, including oil that was collected, making it the world's largest accidental spill. BP challenged the higher figure, saying that the government overestimated the volume. Internal emails released in 2013 showed that one BP employee had estimates that matched those of the FRTG and shared the data with supervisors, but BP continued with its lower number. The company argued that government figures do not reflect the more than 810,000 barrels (34 million US gal; 129,000 m³) of oil that was collected or burned before it could enter the Gulf waters.
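As a sanity check on the figures above, dividing the total estimate by the 87-day flow duration gives the implied average rate, and the stated ±10% uncertainty brackets the total:

```python
TOTAL_BBL = 4.9e6            # total leaked-oil estimate, barrels
DAYS = 87                    # duration of the flow

avg_rate = TOTAL_BBL / DAYS  # implied average, ~56,000 barrels/day
low_bbl = TOTAL_BBL * 0.90   # -10% bound: 4.41 million barrels
high_bbl = TOTAL_BBL * 1.10  # +10% bound: 5.39 million barrels

# The FRTG's 62,000 bbl/day *initial* estimate exceeds this average,
# consistent with the flow rate varying over the spill's duration.
```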
According to the satellite images, the spill directly impacted 68,000 square miles (180,000 km²) of ocean, which is comparable to the size of Oklahoma. By early June 2010, oil had washed up on 125 miles (201 km) of Louisiana's coast and along the Mississippi, Florida, and Alabama coastlines. Oil sludge appeared in the Intracoastal Waterway and on Pensacola Beach and the Gulf Islands National Seashore. In late June, oil reached Gulf Park Estates, its first appearance in Mississippi. In July, tar balls reached Grand Isle and the shores of Lake Pontchartrain. In September a new wave of oil suddenly coated 16 miles (26 km) of Louisiana coastline and marshes west of the Mississippi River in Plaquemines Parish. In October, weathered oil reached Texas. As of July 2011, about 491 miles (790 km) of coastline in Louisiana, Mississippi, Alabama and Florida were contaminated by oil, and a total of 1,074 miles (1,728 km) had been oiled since the spill began. As of December 2012, 339 miles (546 km) of coastline remained subject to evaluation and/or cleanup operations.
Concerns were raised about the appearance of underwater, horizontally extended plumes of dissolved oil. Researchers concluded that deep plumes of dissolved oil and gas would likely remain confined to the northern Gulf of Mexico and that the peak impact on dissolved oxygen would be delayed and long lasting. Two weeks after the wellhead was capped on 15 July 2010, the surface oil appeared to have dissipated, while an unknown amount of subsurface oil remained. Estimates of the residual ranged from a 2010 NOAA report that claimed about half of the oil remained below the surface to independent estimates of up to 75 %.
That means that over 100 million US gallons (380 Ml) (2.4 million barrels) remained in the Gulf. As of January 2011, tar balls, oil sheen trails, fouled wetlands marsh grass and coastal sands were still evident. Subsurface oil remained offshore and in fine silts. In April 2012, oil was still found along as much as 200 miles (320 km) of Louisiana coastline and tar balls continued to wash up on the barrier islands. In 2013, some scientists at the Gulf of Mexico Oil Spill and Ecosystem Science Conference said that as much as one - third of the oil may have mixed with deep ocean sediments, where it risks damage to ecosystems and commercial fisheries.
In 2013, more than 4,600,000 pounds (2,100 t) of "oiled material" was removed from the Louisiana coast. Although only "minute" quantities of oil continued to wash up in 2013, patches of tar balls were still being reported almost every day from Alabama and Florida Panhandle beaches. Regular cleanup patrols were no longer considered justified, but cleanup was being conducted on an as-needed basis in response to public reports.
It was first thought that oil had not reached as far as Tampa Bay, Florida; however, a study done in 2013 found that one of the plumes of dispersant-treated oil had reached a shelf 80 miles (130 km) off the Tampa Bay region. According to researchers, there is "some evidence it may have caused lesions in fish caught in that area".
First, BP unsuccessfully attempted to close the blowout preventer valves on the wellhead with remotely operated underwater vehicles. Next it placed a 125-tonne (280,000 lb) containment dome over the largest leak and piped the oil to a storage vessel. While this technique had worked in shallower water, it failed here when gas combined with cold water to form methane hydrate crystals that blocked the opening at the top of the dome. Pumping heavy drilling fluids into the blowout preventer to restrict the flow of oil before sealing it permanently with cement ("top kill") also failed.
BP then inserted a riser insertion tube into the pipe, with a stopper-like washer around the tube plugging the end of the riser and diverting the flow into the insertion tube. The collected gas was flared and the oil stored on board the drillship Discoverer Enterprise. Before the tube was removed, it collected 924,000 US gallons (22,000 bbl; 3,500 m³) of oil. On 3 June 2010, BP removed the damaged drilling riser from the top of the blowout preventer and covered the pipe with a cap connecting it to another riser. On 16 June a second containment system connected directly to the blowout preventer began carrying oil and gas to service vessels, where it was consumed in a clean-burning system. The United States government's estimates suggested the cap and other equipment were capturing less than half of the leaking oil. On 10 July the containment cap was removed to replace it with a better-fitting cap ("Top Hat Number 10"). Mud and cement were later pumped in through the top of the well to reduce the pressure inside it, but this did not work either. A final device was created to attach a chamber of larger diameter than the flowing pipe, with a flange bolted to the top of the blowout preventer and a manual valve set to close off the flow once attached. On 15 July the device was secured, and time was taken closing the valves to ensure the attachment held under increasing pressure until the valves were closed, completing the temporary measures.
In mid-May, United States Secretary of Energy Steven Chu assembled a team of nuclear physicists, including hydrogen bomb designer Richard Garwin and Sandia National Laboratories director Tom Hunter. Oil expert Matthew Simmons maintained that a nuclear explosion was the only way BP could permanently seal the well and cited successful Soviet attempts to seal off runaway gas wells with nuclear blasts. A spokesperson for the U.S. Energy Department said that "neither Energy Secretary Steven Chu nor anyone else" ever considered this option. On 24 May BP ruled out conventional explosives, claiming that if blasts failed to clog the well, "we would have denied ourselves all other options".
Transocean's Development Driller III started drilling a first relief well on 2 May 2010. GSF Development Driller II started drilling a second relief well on 16 May 2010. On 3 August 2010, first test oil and then drilling mud were pumped at a slow rate of approximately 2 barrels (320 L) per minute into the wellhead. Pumping continued for eight hours, at the end of which the well was declared to be "in a static condition". On 4 August 2010, BP began pumping cement from the top, sealing that part of the flow channel permanently.
On 3 September 2010, the 300-ton failed blowout preventer was removed from the well and a replacement blowout preventer was installed. On 16 September 2010, the relief well reached its destination and pumping of cement to seal the well began. On 19 September 2010, National Incident Commander Thad Allen declared the well "effectively dead" and said that it posed no further threat to the Gulf.
In May 2010, BP admitted they had "discovered things that were broken in the sub-surface" during the "top kill" effort.
Oil slicks were reported in March and August 2011, in March and October 2012, and in January 2013. Repeated scientific analyses confirmed that the sheen was a chemical match for oil from the Macondo well.
The USCG initially said the oil was too dispersed to recover and posed no threat to the coastline, but later warned BP and Transocean that they might be held financially responsible for cleaning up the new oil. USGS director Marcia McNutt stated that the riser pipe could hold at most 1,000 barrels (160 m³) because it is open on both ends, making it unlikely to hold the amount of oil being observed.
In October 2012, BP reported that they had found and plugged leaking oil from the failed containment dome, now abandoned about 1,500 feet (460 m) from the main well. In December 2012, the USCG conducted a subsea survey; no oil coming from the wells or the wreckage was found, and the sheen's source remains unknown. In addition, a white, milky substance was observed seeping from the wreckage. According to BP and the USCG, it is "not oil and it's not harmful".
In January 2013, BP said that they were continuing to investigate possible sources of the oil sheen. Chemical data implied that the substance might be residual oil leaking from the wreckage. If that proves to be the case, the sheen can be expected to eventually disappear. Another possibility is that it is formation oil escaping from the subsurface, using the Macondo well casing as flow conduit, possibly intersecting a naturally occurring fault, and then following that to escape at the surface some distance from the wellhead. If it proves to be oil from the subsurface, then that could indicate the possibility of an indefinite release of oil. The oil slick was comparable in size to naturally occurring oil seeps and was not large enough to pose an immediate threat to wildlife.
The fundamental strategies for addressing the spill were containment, dispersal and removal. In summer 2010, approximately 47,000 people and 7,000 vessels were involved in the project. By 3 October 2012, federal response costs amounted to $850 million, mostly reimbursed by BP. As of January 2013, 935 personnel were still involved. By that time cleanup had cost BP over $14 billion.
It was estimated, with plus-or-minus 10% uncertainty, that 4.9 million barrels (780,000 m³) of oil was released from the well; 4.1 million barrels (650,000 m³) of oil went into the Gulf. The report led by the Department of the Interior and the NOAA said that "75% (of oil) has been cleaned up by Man or Mother Nature"; however, only about 25% of released oil was collected or removed, while about 75% of the oil remained in the environment in one form or another. In 2012, Markus Huettel, a benthic ecologist at Florida State University, maintained that while much of BP's oil was degraded or evaporated, at least 60% remains unaccounted for.
In May 2010, a local resident set up a network for people to volunteer their assistance in cleaning up beaches. Boat captains were given the opportunity to offer the use of their boats to help clean up and prevent the oil from spreading further. To assist with the efforts, captains had to register their ships with the Vessels of Opportunity program; however, only about a third of the registered boats actually participated in the cleanup. Many local supporters were disappointed with BP's slow response, prompting the formation of The Florida Key Environmental Coalition, which gained significant influence in the cleanup of the oil spill in an effort to gain some control over the situation.
Containment booms stretching over 4,200,000 feet (1,300 km) were deployed, either to corral the oil or as barriers to protect marshes, mangroves, shrimp/crab/oyster ranches or other ecologically sensitive areas. Booms extend 18 to 48 inches (0.46 to 1.22 m) above and below the water surface and were effective only in relatively calm and slow-moving waters. Including one-time-use sorbent booms, a total of 13,300,000 feet (4,100 km) of booms were deployed. Booms were criticized for washing up on the shore with the oil, for allowing oil to escape above or below the boom, and for ineffectiveness in waves of more than three to four feet (90 to 120 cm).
The Louisiana barrier island plan was developed to construct barrier islands to protect the coast of Louisiana. The plan was criticised for its expense and poor results. Critics allege that the decision to pursue the project was political with little scientific input. The EPA expressed concern that the booms would threaten wildlife.
The spill was also notable for the volume of Corexit oil dispersant used and for application methods that were "purely experimental". Altogether, 1.84 million US gallons (7,000 m³) of dispersants were used; of this, 771,000 US gallons (2,920 m³) were released at the wellhead. Subsea injection had never previously been tried, but due to the spill's unprecedented nature BP together with the USCG and EPA decided to use it. Over 400 sorties were flown to release the product. Although usage of dispersants was described as "the most effective and fast moving tool for minimizing shoreline impact", the approach continues to be investigated.
A 2011 analysis conducted by Earthjustice and Toxipedia showed that the dispersant could contain cancer-causing agents, hazardous toxins and endocrine-disrupting chemicals. Environmental scientists expressed concerns that the dispersants add to the toxicity of a spill, increasing the threat to sea turtles and bluefin tuna. The dangers are even greater when dispersant is poured into the source of a spill, because it is picked up by the current and washes through the Gulf. According to BP and federal officials, dispersant use stopped after the cap was in place; however, marine toxicologist Riki Ott wrote in an open letter to the EPA that Corexit use continued after that date, and a GAP investigation stated that "(a) majority of GAP witnesses cited indications that Corexit was used after (July 2010)".
According to a NALCO manual obtained by GAP, Corexit 9527 is an "eye and skin irritant. Repeated or excessive exposure... may cause injury to red blood cells (hemolysis), kidney or the liver." The manual adds: "Excessive exposure may cause central nervous system effects, nausea, vomiting, anesthetic or narcotic effects." It advises, "Do not get in eyes, on skin, on clothing," and "Wear suitable protective clothing." For Corexit 9500 the manual advised, "Do not get in eyes, on skin, on clothing," "Avoid breathing vapor," and "Wear suitable protective clothing." According to FOIA requests obtained by GAP, neither the protective gear nor the manual was distributed to Gulf oil spill cleanup workers.
Corexit EC9500A and Corexit EC9527A were the principal variants. The two formulations are neither the least toxic nor the most effective among the EPA 's approved dispersants, but BP said it chose Corexit because it was available the week of the rig explosion. On 19 May, the EPA gave BP 24 hours to choose less toxic alternatives to Corexit from the National Contingency Plan Product Schedule and begin applying them within 72 hours of EPA approval, or to provide a detailed explanation of why no approved products met the standards. On 20 May, BP determined that none of the alternative products met all three criteria of availability, non-toxicity and effectiveness. On 24 May, EPA Administrator Lisa P. Jackson directed the EPA to conduct its own evaluation of alternatives and ordered BP to reduce dispersant use by 75 %. BP reduced Corexit use from 25,689 to 23,250 US gallons (97,240 to 88,010 L) per day, a 9 % decline. On 2 August 2010, the EPA said dispersants did no more harm to the environment than the oil and that they had stopped a large amount of oil from reaching the coast by breaking it down faster. However, some independent scientists and the EPA 's own experts continued to voice concerns about the approach.
Underwater injection of Corexit into the leak may have created the oil plumes which were discovered below the surface. Because the dispersants were applied at depth, much of the oil never rose to the surface. One plume was 22 miles (35 km) long, more than 1 mile (1,600 m) wide and 650 feet (200 m) deep. In a major study on the plume, experts were most concerned about the slow pace at which the oil was breaking down in the cold, 40 ° F (4 ° C) water at depths of 3,000 feet (900 m).
In late 2012, a study from Georgia Tech and Universidad Autonoma de Aguascalientes, published in the journal Environmental Pollution, reported that Corexit used during the BP oil spill had increased the toxicity of the oil 52 times. The scientists concluded that "Mixing oil with dispersant increased toxicity to ecosystems '' and made the Gulf oil spill worse.
The three basic approaches for removing the oil from the water were combustion, offshore filtration, and collection for later processing. The USCG said 33 million US gallons (120,000 m³) of tainted water was recovered, including 5 million US gallons (19,000 m³) of oil. BP said 826,800 barrels (131,450 m³) had been recovered or flared. It is estimated that about 5 % of the leaked oil was burned at the surface and 3 % was skimmed. At the peak of the response, 47,849 people were assigned to the response work.
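The burn and skim percentages above can be roughly cross-checked against the widely cited U.S. government estimate of a total discharge of about 4.9 million barrels (a figure assumed here, not stated in this section):

```python
# Rough consistency check of the response figures.
# Assumption: total discharge of ~4.9 million barrels (the widely
# cited government estimate; not given in the surrounding text).
total_barrels = 4_900_000

burned_barrels = 265_000   # barrels remediated by in-situ burns
skimmed_share = 0.03       # "3 % was skimmed"

burned_share = burned_barrels / total_barrels
print(f"burned: {burned_share:.1%}")   # ~5.4 %, consistent with "about 5 %"
print(f"skimmed: {skimmed_share * total_barrels:,.0f} barrels")
```

Under that assumption, the 265,000 barrels consumed by in-situ burning corresponds to roughly 5.4 % of the total, in line with the "about 5 %" figure quoted above.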
From April to mid-July 2010, 411 controlled in - situ fires remediated approximately 265,000 barrels (11.1 million US gal; 42,100 m³). The fires released small amounts of toxins, including cancer - causing dioxins. According to the EPA 's report, the amount released was not enough to pose an added cancer risk to workers and coastal residents, while a second research team concluded that there was only a small added risk.
Oil was collected from the water using skimmers; in total, 2,063 skimmers of various types were used. Offshore, more than 60 open - water skimmers were deployed, including 12 purpose - built vehicles. EPA regulations prohibited skimmers that left more than 15 parts per million (ppm) of oil in the water, a limit many large - scale skimmers exceeded. Due to the use of Corexit, the oil was too dispersed to collect, according to a spokesperson for shipowner TMT. In mid-June 2010, BP ordered 32 machines that separate oil and water, each capable of extracting up to 2,000 barrels per day (320 m³ / d). After one week of testing, BP put them into operation and by 28 June had removed 890,000 barrels (141,000 m³).
After the well was capped, cleaning up the shoreline became the main task of the response effort. The two main types of affected coast were sandy beaches and marshes. On beaches, the main techniques were sifting sand, removing tar balls, and digging out tar mats manually or with mechanical devices. For marshes, techniques such as vacuuming and pumping, low - pressure flushing, vegetation cutting, and bioremediation were used.
Dispersants are said to facilitate the digestion of the oil by microbes. Mixing dispersants with oil at the wellhead would keep some oil below the surface, in theory allowing microbes to digest it before it reached the surface. Various risks were identified and evaluated, in particular that an increase in microbial activity might reduce subsea oxygen levels, threatening fish and other animals.
Several studies suggest that microbes successfully consumed part of the oil. By mid-September, other research claimed that the microbes had mainly digested natural gas rather than oil. David L. Valentine, a professor of microbial geochemistry at UC Santa Barbara, said that the capability of microbes to break down the leaked oil had been greatly exaggerated. However, biogeochemist Chris Reddy said natural microorganisms were a big reason why the oil spill in the Gulf of Mexico was not far worse.
Genetically modified Alcanivorax borkumensis was added to the waters to speed digestion. The delivery method of microbes to oil patches was proposed by the Russian Research and Development Institute of Ecology and the Sustainable Use of Natural Resources.
On 18 May 2010, BP was designated the lead "Responsible Party '' under the Oil Pollution Act of 1990, which meant that BP had operational authority in coordinating the response.
The first video images were released on 12 May, and further video images were released by members of Congress who had been given access to them by BP.
During the spill response operations, at the request of the Coast Guard, the Federal Aviation Administration (FAA) implemented a 900 - square - mile (2,300 km²) temporary flight restriction zone over the operations area. The restrictions were intended to prevent civilian air traffic from interfering with aircraft assisting the response effort. All flights in the operations area were prohibited except flights authorized by air traffic control; routine flights supporting offshore oil operations; federal, state, local and military flight operations supporting spill response; and air ambulance and law enforcement operations. Exceptions to these restrictions were granted on a case - by - case basis depending on safety issues, operational requirements, weather conditions, and traffic volume. No flights, except aircraft conducting aerial chemical dispersing operations or those landing and taking off, were allowed below 1,000 metres (3,300 ft). Notwithstanding the restrictions, there were 800 to 1,000 flights per day during the operations.
Local and federal authorities, citing BP 's authority, denied access to members of the press attempting to document the spill from the air, from boats, and on the ground, blocking access to areas that were open to the public. In some cases photographers were granted access only with BP officials escorting them on BP - contracted boats and aircraft. In one example, the U.S. Coast Guard stopped Jean - Michel Cousteau 's boat and allowed it to proceed only after being assured that no journalists were on board. In another example, a CBS News crew was denied access to the oil - covered beaches of the spill area; when trying to film, the crew was told by the authorities, "this is BP 's rules, not ours. '' Some members of Congress criticized the restrictions placed on access by journalists.
The FAA denied that BP employees or contractors made decisions on flights and access, saying those decisions were made by the FAA and Coast Guard. The FAA acknowledged that media access was limited to hired planes or helicopters, but was arranged through the Coast Guard. The Coast Guard and BP denied having a policy of restricting journalists; they noted that members of the media had been embedded with the authorities and allowed to cover response efforts since the beginning of the effort, with more than 400 embeds aboard boats and aircraft to date. They also said that they wanted to provide access to the information while maintaining safety.
On 15 April 2014, BP claimed that cleanup along the coast was substantially complete, but the United States Coast Guard responded that much work remained. Using physical barriers such as floating booms, cleanup workers sought to keep the oil from spreading further. They used skimmer boats to remove a majority of the oil and sorbents to absorb remnants of oil like a sponge. Because those methods did not remove the oil completely, chemicals called dispersants were used to hasten the oil 's degradation and prevent it from doing further damage to the marine habitats below the surface water. For the Deepwater Horizon oil spill, cleanup workers used 1,400,000 US gallons (5,300,000 l; 1,200,000 imp gal) of various chemical dispersants to further break down the oil.
The State of Louisiana was funded by BP to conduct regular testing of fish, shellfish, water, and sand. Initial testing regularly showed detectable levels of dioctyl sodium sulfosuccinate, a chemical used in the cleanup. Testing over the following year, as reported by GulfSource.org, did not produce detectable results for the pollutants tested.
The spill area hosts 8,332 species, including more than 1,270 fish, 604 polychaetes, 218 birds, 1,456 mollusks, 1,503 crustaceans, 4 sea turtles and 29 marine mammals. Between May and June 2010, the spill waters contained 40 times more polycyclic aromatic hydrocarbons (PAHs) than before the spill. PAHs are often linked to oil spills and include carcinogens and chemicals that pose various health risks to humans and marine life. The PAHs were most concentrated near the Louisiana Coast, but levels also jumped 2 -- 3 fold in areas off Alabama, Mississippi and Florida. PAHs can harm marine species directly and microbes used to consume the oil can reduce marine oxygen levels. The oil contained approximately 40 % methane by weight, compared to about 5 % found in typical oil deposits. Methane can potentially suffocate marine life and create "dead zones '' where oxygen is depleted.
A 2014 study of the effects of the oil spill on bluefin tuna funded by National Oceanic and Atmospheric Administration (NOAA), Stanford University, and the Monterey Bay Aquarium and published in the journal Science, found that the toxins from oil spills can cause irregular heartbeats leading to cardiac arrest. Calling the vicinity of the spill "one of the most productive ocean ecosystems in the world '', the study found that even at very low concentrations "PAH cardiotoxicity was potentially a common form of injury among a broad range of species in the vicinity of the oil. '' Another peer - reviewed study, released in March 2014 and conducted by 17 scientists from the United States and Australia and published in the Proceedings of the National Academy of Sciences, found that tuna and amberjack that were exposed to oil from the spill developed deformities of the heart and other organs that would be expected to be fatal or at least life - shortening. The scientists said that their findings would most likely apply to other large predator fish and "even to humans, whose developing hearts are in many ways similar. '' BP responded that the concentrations of oil in the study were a level rarely seen in the Gulf, but The New York Times reported that the BP statement was contradicted by the study.
The oil dispersant Corexit, previously only used as a surface application, was released underwater in unprecedented amounts, with the intent of making the oil more easily biodegraded by naturally occurring microbes. Thus, oil that would normally rise to the surface of the water was emulsified into tiny droplets and remained suspended in the water and on the sea floor. The oil and dispersant mixture permeated the food chain through zooplankton. Signs of an oil - and - dispersant mix were found under the shells of tiny blue crab larvae. A study of insect populations in the coastal marshes affected by the spill also found a significant impact. Chemicals from the spill were found in migratory birds as far away as Minnesota. Pelican eggs contained "petroleum compounds and Corexit ''. Dispersant and PAHs from the oil are believed to have caused the "disturbing numbers '' of mutated fish that scientists and commercial fishers saw in 2012, including 50 % of shrimp found lacking eyes and eye sockets. Fish with oozing sores and lesions were first noted by fishermen in November 2010. Prior to the spill, approximately 0.1 % of Gulf fish had lesions or sores. A report from the University of Florida said that many locations showed 20 % of fish with lesions, while later estimates reached 50 %. In October 2013, Al Jazeera reported that the Gulf ecosystem was "in crisis '', citing a decline in seafood catches, as well as deformities and lesions found in fish.
In July 2010, it was reported that the spill was "already having a ' devastating ' effect on marine life in the Gulf ''. Damage to the ocean floor especially endangered the Louisiana pancake batfish, whose range is entirely contained within the spill - affected area. In March 2012, a definitive link was found between the death of a Gulf coral community and the spill. According to NOAA, a cetacean Unusual Mortality Event (UME) had been recognized since before the spill began; NOAA is investigating possible contributing factors to the ongoing UME from the Deepwater Horizon spill, with the possibility of criminal charges being filed if the spill is shown to be connected. Some estimates are that only 2 % of the carcasses of killed mammals have been recovered.
In the first birthing season for dolphins after the spill, dead baby dolphins washed up along Mississippi and Alabama shorelines at about 10 times the normal number. A peer - reviewed NOAA / BP study disclosed that nearly half the bottlenose dolphins tested in mid-2011 in Barataria Bay, a heavily oiled area, were in "guarded or worse '' condition, "including 17 percent that were not expected to survive ''. BP officials deny that the disease conditions are related to the spill, saying that dolphin deaths actually began being reported before the BP oil spill. By 2013, over 650 dolphins had been found stranded in the oil spill area, a four-fold increase over the historical average. The National Wildlife Federation (NWF) reports that sea turtles, mostly endangered Kemp 's ridley sea turtles, have been stranding at a high rate. Before the spill there were an average of 100 strandings per year; since the spill the number has jumped to roughly 500. NWF senior scientist Doug Inkley notes that the marine death rates are unprecedented and occurring high in the food chain, strongly suggesting there is "something amiss with the Gulf ecosystem ''. In December 2013, the journal Environmental Science & Technology published a study finding that of 32 dolphins briefly captured from a 24 - km stretch near southeastern Louisiana, half were seriously ill or dying. BP said the report was "inconclusive as to any causation associated with the spill ''.
In 2012, tar balls continued to wash up along the Gulf coast, and in 2013 tar balls could still be found on the Mississippi and Louisiana coasts, along with oil sheens in marshes and signs of severe erosion of coastal islands, brought about by the death of trees and marsh grass exposed to the oil. In 2013, former NASA physicist Bonny Schumaker noted a "dearth of marine life '' in a radius of 30 to 50 miles (48 to 80 km) around the well, after flying over the area numerous times since May 2010.
In 2013, researchers found that oil on the bottom of the seafloor did not seem to be degrading, and observed a phenomenon called a "dirty blizzard '': oil in the water column began clumping around suspended sediments, and falling to the ocean floor in an "underwater rain of oily particles. '' The result could have long - term effects because oil could remain in the food chain for generations.
A 2014 bluefin tuna study in Science found that oil already broken down by wave action and chemical dispersants was more toxic than fresh oil. A 2015 study of the relative toxicity of oil and dispersants to coral also found that the dispersants were more toxic than the oil.
A 2015 study by the National Oceanic and Atmospheric Administration, published in PLOS ONE, links the sharp increase in dolphin deaths to the Deepwater Horizon oil spill.
On 12 April 2016, a research team reported that 88 percent of about 360 baby or stillborn dolphins within the spill area "had abnormal or under - developed lungs '', compared to 15 percent in other areas. The study was published in the April 2016 issue of Diseases of Aquatic Organisms.
By June 2010, 143 spill - exposure cases had been reported to the Louisiana Department of Health and Hospitals; 108 of those involved workers in the clean - up efforts, while 35 were reported by residents. Chemicals from the oil and dispersant are believed to be the cause; it is believed that the addition of dispersants made the oil more toxic.
The United States Department of Health and Human Services set up the GuLF Study in June 2010 in response to these reports. The study is run by the National Institute of Environmental Health Sciences, and will last at least five years.
Mike Robicheux, a Louisiana physician, described the situation as "the biggest public health crisis from a chemical poisoning in the history of this country. '' In July, after testing the blood of BP cleanup workers and residents in Louisiana, Mississippi, Alabama, and Florida for volatile organic compounds, environmental scientist Wilma Subra said she was "finding amounts 5 to 10 times in excess of the 95th percentile ''; she said that "the presence of these chemicals in the blood indicates exposure. '' Riki Ott, a marine toxicologist with experience of the Exxon Valdez oil spill, advised families to evacuate the Gulf. She said that workers from the Valdez spill had suffered long - term health consequences.
Following the 26 May 2010 hospitalization of seven fishermen who were working on the cleanup crew, BP requested that the National Institute for Occupational Safety and Health perform a Health Hazard Evaluation covering all offshore cleanup activities; BP later requested a second NIOSH investigation of onshore cleanup operations. Tests for chemical exposure in the seven fishermen were negative; NIOSH concluded that the hospitalizations were most likely a result of heat, fatigue, and terpenes that were being used to clean the decks. A review of 10 later hospitalizations found that heat exposure and dehydration were consistent findings but could not establish chemical exposure. NIOSH personnel performed air monitoring around cleanup workers at sea, on land, and during the application of Corexit. Air concentrations of volatile organic compounds and PAHs never exceeded permissible exposure levels. A limitation of this methodology was that some VOCs may have already evaporated from the oil before the investigation began. In its report, NIOSH suggested the possibility that respiratory symptoms might have been caused by high levels of ozone or reactive aldehydes in the air, possibly produced by photochemical reactions in the oil. NIOSH did note that many of the personnel involved were not wearing personal protective equipment (gloves and impermeable coveralls) as they had been instructed to, and emphasized that this was important protection against transdermal absorption of chemicals from the oil. Heat stress was found to be the most pressing safety concern.
Workers reported that they were not allowed to use respirators, and that their jobs were threatened if they did. OSHA said cleanup workers were receiving "minimal '' exposure to airborne toxins, and that OSHA would require BP to provide certain protective clothing, but not respirators. ProPublica reported that workers were being photographed while working with no protective clothing. An independent investigation for Newsweek found that BP did not hand out the legally required safety manual for use with Corexit, and that workers were not provided with safety training or protective gear.
A 2012 survey of the health effects of the spill on cleanup workers reported "eye, nose and throat irritation; respiratory problems; blood in urine, vomit and rectal bleeding; seizures; nausea and violent vomiting episodes that last for hours; skin irritation, burning and lesions; short - term memory loss and confusion; liver and kidney damage; central nervous system effects and nervous system damage; hypertension; and miscarriages ''. Dr. James Diaz, writing for the American Journal of Disaster Medicine, said these ailments appearing in the Gulf reflected those reported after previous oil spills, like the Exxon Valdez. Diaz warned that "chronic adverse health effects, including cancers, liver and kidney disease, mental health disorders, birth defects and developmental disorders should be anticipated among sensitive populations and those most heavily exposed ''. Diaz also believes neurological disorders should be expected.
Two years after the spill, a study initiated by the National Institute for Occupational Safety and Health found biomarkers matching the oil from the spill in the bodies of cleanup workers. Other studies have reported a variety of mental health issues, skin problems, breathing issues, coughing, and headaches. In 2013, during the three - day "Gulf of Mexico Oil Spill & Ecosystem Science Conference '', findings discussed included a "significant percentage '' of Gulf residents reporting mental health problems such as anxiety, depression and PTSD. These studies also showed that the bodies of former spill cleanup workers carry biomarkers of "many chemicals contained in the oil ''.
A study that investigated the health effects among children in Louisiana and Florida living less than 10 miles from the coast found that more than a third of the parents reported physical or mental health symptoms among their children. The parents reported "unexplained symptoms among their children, including bleeding ears, nose bleeds, and the early start of menstruation among girls, '' according to David Abramson, director of Columbia University 's National Center for Disaster Preparedness.
A cohort study of almost 2,200 Louisiana women found that "high physical / environmental exposure was significantly associated with all 13 of the physical health symptoms surveyed, with the strongest associations for burning in nose, throat or lungs; sore throat; dizziness and wheezing. '' Women who suffered a high degree of economic disruption as a result of the spill were significantly more likely to report wheezing; headaches; watery, burning, itchy eyes; and a stuffy, itchy, runny nose.
The spill had a strong economic impact on BP and on Gulf Coast economy sectors such as offshore drilling, fishing and tourism. Lost tourism dollars were projected to cost the Gulf coastal economy up to $22.7 billion through 2013. In addition, Louisiana reported that lost visitor spending through the end of 2010 totaled $32 million, and losses through 2013 were expected to total $153 million in that state alone. The Gulf of Mexico commercial fishing industry was estimated to have lost $247 million as a result of post-spill fisheries closures. One study projected that the overall impact of lost or degraded commercial, recreational, and mariculture fisheries in the Gulf could reach $8.7 billion by 2020, with a potential loss of 22,000 jobs over the same time frame. BP 's expenditures on the spill included the cost of the spill response, containment, relief well drilling, grants to the Gulf states, claims paid, and federal costs, including fines and penalties. As of March 2012, BP estimated that the company 's total spill - related expenses would not exceed $37.2 billion. However, by some estimates, penalties that BP may be required to pay could reach as high as $90 billion. In addition, in November 2012 the EPA announced that BP would be temporarily banned from seeking new contracts with the US government. Due to the loss of market value, BP had dropped from the second to the fourth largest of the four major oil companies by 2013. During the crisis, BP gas stations in the United States reported a sales drop of between 10 and 40 % due to backlash against the company.
Local officials in Louisiana expressed concern that the offshore drilling moratorium imposed in response to the spill would further harm the economies of coastal communities, as the oil industry directly or indirectly employs about 318,000 Louisiana residents (17 % of all jobs in the state). NOAA closed 86,985 square miles (225,290 km²), or approximately 36 % of federal waters in the Gulf of Mexico, to commercial fishing, costing the fishing industry an estimated $2.5 billion. The U.S. Travel Association estimated that the economic impact of the oil spill on tourism across the Gulf Coast over a three - year period could exceed approximately $23 billion, in a region that supports over 400,000 travel industry jobs generating $34 billion in revenue annually.
On 30 April 2010, President Barack Obama ordered the federal government to hold the issuing of new offshore drilling leases and authorized investigation of 29 oil rigs in the Gulf in an effort to determine the cause of the disaster. Later, a six - month moratorium on offshore drilling (below 500 feet (150 m) of water) was imposed by the United States Department of the Interior. The moratorium suspended work on 33 rigs, and a group of affected companies formed the Back to Work Coalition. On 22 June, Martin Leach - Cross Feldman, a federal judge on the United States District Court for the Eastern District of Louisiana, ruling in Hornbeck Offshore Services LLC v. Salazar, lifted the moratorium, finding it too broad, arbitrary and not adequately justified. The ban was lifted in October 2010.
On 28 April 2010, the National Energy Board of Canada, which regulates offshore drilling in the Canadian Arctic and along the British Columbia Coast, issued a letter to oil companies asking them to explain their argument against safety rules which require same - season relief wells. On 3 May California Governor Arnold Schwarzenegger withdrew his support for a proposed plan to allow expanded offshore drilling projects in California. On 8 July, Florida Governor Charlie Crist called for a special session of the state legislature to draft an amendment to the state constitution banning offshore drilling in state waters, which the legislature rejected on 20 July.
In October 2011, the United States Department of the Interior 's Minerals Management Service was dissolved after it was determined it had exercised poor oversight over the drilling industry. Three new agencies replaced it, separating the regulation, leasing, and revenue collection responsibilities respectively, among the Bureau of Safety and Environmental Enforcement, the Bureau of Ocean Energy Management, and Office of Natural Resources Revenue.
In March 2014, BP was again allowed to bid for oil and gas leases.
On 30 April President Obama dispatched the Secretaries of the Department of Interior and Homeland Security, as well as the EPA Administrator and NOAA to the Gulf Coast to assess the disaster. In his 15 June speech, Obama said, "This oil spill is the worst environmental disaster America has ever faced... Make no mistake: we will fight this spill with everything we 've got for as long as it takes. We will make BP pay for the damage their company has caused. And we will do whatever 's necessary to help the Gulf Coast and its people recover from this tragedy. '' Interior Secretary Ken Salazar stated, "Our job basically is to keep the boot on the neck of British Petroleum. '' Some observers suggested that the Obama administration was being overly aggressive in its criticisms, which some BP investors saw as an attempt to deflect criticism of his own handling of the crisis. Rand Paul accused President Obama of being anti-business and "un-American ''.
Public opinion polls in the U.S. were generally critical of the way President Obama and the federal government handled the disaster, and extremely critical of BP 's response. Across the US, thousands participated in dozens of protests at BP gas stations and other locations, reducing sales at some stations by 10 % to 40 %.
The industry claimed that disasters were infrequent and that this spill was an isolated incident, and it rejected claims of a loss of industry credibility. The American Petroleum Institute (API) stated that the offshore drilling industry is important to job creation and economic growth. CEOs from the top five oil companies all agreed to work harder at improving safety. API announced the creation of an offshore safety institute, separate from its lobbying operation.
The Organization for International Investment, a Washington D.C. - based advocate for overseas investment in the United States, warned that the heated rhetoric was potentially damaging the reputation of British companies with operations in the United States and could spark a wave of U.S. protectionism that would restrict British firms from government contracts, political donations and lobbying.
In the UK, there was anger at the American press and news outlets for the misuse of the term "British Petroleum '' for the company -- a name which has not been used since British Petroleum merged with the American company Amoco in 1998 to form BP. It was said that the U.S. was ' dumping ' the blame onto the British people and there were calls for British Prime Minister David Cameron to protect British interests in the United States. British pension fund managers (who have large holdings of BP shares and rely upon its dividends) accepted that while BP had to pay compensation for the spill and the environmental damage, they argued that the cost to the company 's market value from President Obama 's criticism was far outweighing the direct clean - up costs.
Initially, BP downplayed the incident; its CEO Tony Hayward called the amount of oil and dispersant "relatively tiny '' in comparison with the "very big ocean. '' He later drew an outpouring of criticism when he said that the spill was a disruption to Gulf Coast residents and to himself, adding, "You know, I 'd like my life back. '' BP 's chief operating officer Doug Suttles disputed reports of underwater plumes, noting, "It may be down to how you define what a plume is here... The oil that has been found is in very minute quantities. '' In June, BP launched a PR campaign and successfully bid for several search terms related to the spill on Google and other search engines so that the first sponsored search result linked directly to the company 's website. On 26 July 2010, it was announced that Hayward would resign as CEO and be replaced by Bob Dudley, an American citizen who had previously worked for Amoco.
Hayward 's involvement in the Deepwater Horizon disaster has left him a highly controversial public figure. In May 2013, he was honored as a "distinguished leader '' by the University of Birmingham, but his award ceremony was interrupted multiple times by jeers and walk - outs and was the focus of a protest by People & Planet members.
In July 2013, Hayward was awarded an honorary degree from Robert Gordon University. This was described as "a very serious error of judgement '' by Friends of the Earth Scotland, and "a sick joke '' by the university 's Student President.
The U.S. government rejected offers of cleanup help from Canada, Croatia, France, Germany, Ireland, Mexico, the Netherlands, Norway, Romania, South Korea, Spain, Sweden, the United Kingdom, and the United Nations. The U.S. State Department listed 70 assistance offers from 23 countries, all of which were initially declined; 8 were later accepted. The USCG actively requested skimming boats and equipment from several countries.
In the United States the Deepwater Horizon investigation included several investigations and commissions, including reports by the USCG National Incident Commander, Admiral Thad Allen, the National Commission on the BP Deepwater Horizon Oil Spill and Offshore Drilling, Bureau of Ocean Energy Management, Regulation and Enforcement (BOEMRE), National Academy of Engineering, National Research Council, Government Accountability Office, National Oil Spill Commission, and Chemical Safety and Hazard Investigation Board. The Republic of the Marshall Islands Maritime Administrator conducted a separate investigation on the marine casualty. BP conducted its internal investigation.
An investigation of the possible causes of the explosion was launched on 22 April 2010 by the USCG and the Minerals Management Service. On 11 May the United States administration requested the National Academy of Engineering conduct an independent technical investigation. The National Commission on the BP Deepwater Horizon Oil Spill and Offshore Drilling was established on 22 May to "consider the root causes of the disaster and offer options on safety and environmental precautions. '' The investigation by United States Attorney General Eric Holder was announced on 1 June 2010. Also the United States House Committee on Energy and Commerce conducted a number of hearings, including hearings of Tony Hayward and heads of Anadarko and Mitsui 's exploration unit. According to the US Congressional investigation, the rig 's blowout preventer, built by Cameron International Corporation, had a hydraulic leak and a failed battery, and therefore failed.
On 8 September 2010, BP released a 193 - page report on its web site. The report places some of the blame for the accident on BP but also on Halliburton and Transocean. The report found that on 20 April 2010, managers misread pressure data and gave their approval for rig workers to replace drilling fluid in the well with seawater, which was not heavy enough to prevent gas that had been leaking into the well from firing up the pipe to the rig, causing the explosion. The conclusion was that BP was partly to blame, as was Transocean, which owned the rig. Responding to the report, Transocean and Halliburton placed all blame on BP.
On 9 November 2010, a report by the Oil Spill Commission said that there had been "a rush to completion '' on the well and criticised poor management decisions. "There was not a culture of safety on that rig, '' the co-chair said.
The National Commission on the BP Deepwater Horizon Oil Spill and Offshore Drilling released its final report on 5 January 2011. The panel found that BP, Halliburton, and Transocean had attempted to work more cheaply and thus helped to trigger the explosion and ensuing leakage. The report stated that "whether purposeful or not, many of the decisions that BP, Halliburton, and Transocean made that increased the risk of the Macondo blowout clearly saved those companies significant time (and money)." BP released a statement in response, saying that "even prior to the conclusion of the commission's investigation, BP instituted significant changes designed to further strengthen safety and risk management." Transocean, however, blamed BP for the decisions made before the explosion and government officials for permitting those decisions. Halliburton stated that it was acting only upon the orders of BP when it injected the cement into the wall of the well, and criticized BP for its failure to run a cement bond log test. The report accused BP of nine faults, among them not using a diagnostic tool to test the strength of the cement, ignoring a pressure test that had failed, and not plugging the pipe with cement. The study did not, however, place the blame on any one of these events; rather, it concluded that "notwithstanding these inherent risks, the accident of April 20 was avoidable" and that "it resulted from clear mistakes made in the first instance by BP, Halliburton and Transocean, and by government officials who, relying too much on industry's assertions of the safety of their operations, failed to create and apply a program of regulatory oversight that would have properly minimized the risk of deepwater drilling." The panel also noted that government regulators lacked sufficient knowledge or authority to notice these cost-cutting decisions.
On 23 March 2011, BOEMRE (former MMS) and the USCG published a forensic examination report on the blowout preventer, prepared by Det Norske Veritas. The report concluded that the primary cause of failure was that the blind shear rams failed to fully close and seal due to a portion of drill pipe buckling between the shearing blocks.
The US government report issued in September 2011 stated that BP is ultimately responsible for the spill, and that Halliburton and Transocean share some of the blame. The report states that the main cause was the defective cement job, and Halliburton, BP and Transocean were, in different ways, responsible for the accident. The report stated that, although the events leading to the sinking of Deepwater Horizon were set into motion by the failure to prevent a well blowout, the investigation revealed numerous systems deficiencies, and acts and omissions by Transocean and its Deepwater Horizon crew, that had an adverse impact on the ability to prevent or limit the magnitude of the disaster. The report also states that a central cause of the blowout was failure of a cement barrier allowing hydrocarbons to flow up the wellbore, through the riser and onto the rig, resulting in the blowout. The loss of life and the subsequent pollution of the Gulf of Mexico were the result of poor risk management, last ‐ minute changes to plans, failure to observe and respond to critical indicators, inadequate well control response, and insufficient emergency bridge response training by companies and individuals responsible for drilling at the Macondo well and for the operation of the drilling platform.
On 16 June 2010, after BP executives met with President Obama, BP announced and established the Gulf Coast Claims Facility (GCCF), a $20 billion fund to settle claims arising from the Deepwater Horizon spill. This fund was set aside for natural resource damages, state and local response costs, and individual compensation, but could not be used for fines or penalties. Prior to establishing the GCCF, emergency compensation was paid by BP from an initial facility.
The GCCF was administered by attorney Kenneth Feinberg. The facility began accepting claims on 23 August 2010. On 8 March 2012, after BP and a team of plaintiffs' attorneys agreed to a class-action settlement, court-supervised administrator Patrick Juneau took over administration. By then, more than one million claims from 220,000 individual and business claimants had been processed and more than $6.2 billion had been paid out from the fund; 97% of payments were made to claimants in the Gulf States. In June 2012, the settlement of claims through the GCCF was replaced by the court-supervised settlement program. During this transition period an additional $404 million in claims was paid.
The GCCF and its administrator Feinberg were criticized over the amount and speed of payments as well as a lack of transparency. An independent audit of the GCCF, announced by Attorney General Eric Holder, was approved by the Senate on 21 October 2011. The auditor, BDO Consulting, found that 7,300 claimants had been wrongly denied or underpaid; as a result, about $64 million in additional payments was made. The Mississippi Center for Justice provided pro bono assistance to 10,000 people to help them "navigate the complex claims process." In a New York Times opinion piece, Stephen Teague, staff attorney at the Mississippi Center for Justice, argued that BP had become "increasingly brazen" in "stonewalling payments": "But tens of thousands of gulf residents still haven't been fully compensated for their losses, and many are struggling to make ends meet. Many low-wage workers in the fishing and service industries, for example, have been seeking compensation for lost wages and jobs for three years."
In July 2013 BP made a motion in court to freeze payments on tens of thousands of claims, arguing inter alia that a staff attorney from the Deepwater Horizon Court - Supervised Settlement Program, the program responsible for evaluating compensation claims, had improperly profited from claims filed by a New Orleans law firm. The attorney is said to have received portions of settlement claims for clients he referred to the firm. The federal judge assigned to the case, Judge Barbier, refused to halt the settlement program, saying he had not seen evidence of widespread fraud, adding that he was "offended by what he saw as attempts to smear the lawyer administering the claims. ''
By 26 May 2010, over 130 lawsuits relating to the spill had been filed against one or more of BP, Transocean, Cameron International Corporation, and Halliburton Energy Services, although observers considered it likely that these would be consolidated into one court as multidistrict litigation. On 21 April 2011, BP filed $40 billion in lawsuits against rig owner Transocean, cementer Halliburton and blowout-preventer manufacturer Cameron. The oil firm alleged that failed safety systems and irresponsible behaviour by its contractors had led to the explosion, including claims that Halliburton failed to properly use modelling software to analyze safe drilling conditions. The firms denied the allegations.
On 2 March 2012, BP and plaintiffs agreed to settle their lawsuits. The deal would settle roughly 100,000 claims filed by individuals and businesses affected by the spill. On 13 August, BP asked US District Judge Carl Barbier to approve the settlement, saying its actions "did not constitute gross negligence or willful misconduct." On 13 January 2013, Judge Barbier approved a medical-benefits portion of BP's proposed $7.8 billion partial settlement. People living for at least 60 days along oil-impacted shores or involved in the clean-up who can document one or more specific health conditions caused by the oil or dispersants are eligible for benefits, as are those injured during clean-up. BP also agreed to spend $105 million over five years to set up a Gulf Coast health outreach program and pay for medical examinations. According to a group representing the plaintiffs, the deal has no specific cap. BP said that it had $9.5 billion in assets set aside in a trust to pay the claims, and that the settlement would not increase the $37.2 billion the company budgeted for spill-related expenses. BP originally expected to spend $7.8 billion; by October 2013 it had increased its projection to $9.2 billion, saying it could be "significantly higher."
On 31 August 2012, the US Department of Justice (DOJ) filed papers in federal court in New Orleans blaming BP for the Gulf oil spill, describing the spill as an example of "gross negligence and willful misconduct. '' In their statement the DOJ said that some of BP 's arguments were "plainly misleading '' and that the court should ignore BP 's argument that the Gulf region is "undergoing a robust recovery ''. BP rejected the charges saying "BP believes it was not grossly negligent and looks forward to presenting evidence on this issue at trial in January. '' The DOJ also said Transocean, the owner and operator of the Deepwater Horizon rig, was guilty of gross negligence as well.
On 14 November 2012, BP and the US Department of Justice reached a settlement. BP will pay $4.5 billion in fines and other payments, the largest of its kind in US history. In addition, the U.S. government temporarily banned BP from new federal contracts over its "lack of business integrity ''. The plea was accepted by Judge Sarah Vance of the United States District Court for the Eastern District of Louisiana on 31 January 2013. The settlement includes payments of $2.394 billion to the National Fish and Wildlife Foundation, $1.15 billion to the Oil Spill Liability Trust Fund, $350 million to the National Academy of Sciences for oil spill prevention and response research, $100 million to the North America Wetland Conservation Fund, $6 million to General Treasury and $525 million to the Securities and Exchange Commission.
On 3 January 2013 the US Justice Department announced "Transocean Deepwater Inc. has agreed to plead guilty to violating the Clean Water Act and to pay a total of $1.4 billion in civil and criminal fines and penalties". Of that amount, $800 million goes to the Gulf Coast Restoration Trust Fund, $300 million to the Oil Spill Liability Trust Fund, $150 million to the National Wild Turkey Federation and $150 million to the National Academy of Sciences. MOEX Offshore 2007 agreed to pay $45 million to the Oil Spill Liability Trust Fund, $25 million to five Gulf states and $20 million to supplemental environmental projects.
On 25 July 2013 Halliburton pleaded guilty to destruction of critical evidence after the oil spill and said it would pay the maximum allowable fine of $200,000 and will be subject to three years of probation.
On 9 July 2013 Alaska inventor and oil field veteran Chris McIntyre filed suit against BP, alleging that the company used his design to cap the Macondo Well without compensation. McIntyre sent BP the design for the capping device on 14 May 2010. BP subsequently used McIntyre 's design (or one very similar) to shut in the well on 15 July 2010. BP maintains that its employees first conceived of the design some days before McIntyre. Both parties agree that the device did not exist prior to 20 April 2010. The case, Christopher McIntyre v. BP Exploration & Production is currently on appeal with the United States Court for the Ninth Circuit in San Francisco. McIntyre seeks remand to the District Court of Alaska for a jury trial.
In January 2014, a panel of the U.S. Fifth Circuit Court of Appeals rejected an effort by BP to curb payment of what it described as "fictitious" and "absurd" claims to a settlement fund for businesses and persons affected by the oil spill. BP said administration of the 2012 settlement was marred by the fact that people without actual damages could file a claim. The court ruled that BP hadn't explained "how this court or the district court should identify or even discern the existence of 'claimants that have suffered no cognizable injury.'" The Court then went further, calling BP's position "nonsensical." The Supreme Court of the United States later refused to hear BP's appeal after victims and claimants, along with numerous Gulf coast area chambers of commerce, objected to the oil major's efforts to renege on the Settlement Agreement.
In September 2014, Halliburton agreed to settle a large percentage of legal claims against it by paying $1.1 billion into a trust by way of three installments over two years.
BP and its partners in the oil well, Transocean and Halliburton, went on trial on 25 February 2013 in the United States District Court for the Eastern District of Louisiana in New Orleans to determine payouts and fines under the Clean Water Act and the Natural Resources Damage Assessment. The plaintiffs included the U.S. Justice Department, Gulf states and private individuals. Tens of billions of dollars in liability and fines were at stake. A finding of gross negligence would result in a four-fold increase in the fines BP would have to pay for violating the federal Clean Water Act, and leave the company liable for punitive damages for private claims.
The trial's first phase was to determine the liability of BP, Transocean, Halliburton, and other companies, and whether they acted with gross negligence and willful misconduct. The second phase, scheduled for September 2013, focused on the flow rate of the oil, and the third phase, scheduled for 2014, was to consider damages. According to the plaintiffs' lawyers, the major cause of the explosion was the mishandling of a rig safety test, while inadequate training of the staff, poor maintenance of the equipment and substandard cement were also cited as contributing to the disaster. According to The Wall Street Journal, the U.S. government and Gulf Coast states had prepared an offer to BP for a $16 billion settlement; however, it was not clear whether the deal had been officially proposed to BP or whether BP had accepted it.
On 4 September 2014, U.S. District Judge Carl Barbier ruled BP was guilty of gross negligence and willful misconduct. He described BP 's actions as "reckless. '' He said Transocean 's and Halliburton 's actions were "negligent. '' He apportioned 67 % of the blame for the spill to BP, 30 % to Transocean, and 3 % to Halliburton. Fines would be apportioned commensurate with the degree of negligence of the parties, measured against the number of barrels of oil spilled. Under the Clean Water Act fines can be based on a cost per barrel of up to $4,300, at the discretion of the judge. The number of barrels was in dispute at the conclusion of the trial with BP arguing 2.5 million barrels were spilled over the 87 days the spill lasted, while the court contends 4.2 million barrels were spilled. BP issued a statement strongly disagreeing with the finding, and saying the court 's decision would be appealed.
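The stakes of the barrel-count dispute follow directly from the per-barrel maximum. As a rough illustrative sketch (not an official penalty calculation; the $4,300 per-barrel cap and the two barrel estimates are taken from the paragraph above), the maximum Clean Water Act exposure under each estimate works out as:

```python
PER_BARREL_MAX = 4_300  # maximum Clean Water Act fine per barrel under a gross-negligence finding

estimates = {
    "BP (2.5 million barrels)": 2_500_000,
    "Court (4.2 million barrels)": 4_200_000,
}

for label, barrels in estimates.items():
    max_fine = barrels * PER_BARREL_MAX
    print(f"{label}: up to ${max_fine / 1e9:.2f} billion")
```

The court's figure yields roughly $18.06 billion, consistent with the approximately $18 billion in additional damages reported after the ruling.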
Barbier ruled that BP had acted with "conscious disregard of known risks '' and rejected BP 's assertion that other parties were equally responsible for the oil spill. His ruling stated that BP "employees took risks that led to the largest environmental disaster in U.S. history, '' that the company was "reckless, '' and determined that several crucial BP decisions were "primarily driven by a desire to save time and money, rather than ensuring that the well was secure. '' The ruling means that BP, which had already spent more than $28 billion on cleanup costs and damage claims, may be liable for another $18 billion in damages, four times the Clean Water Act maximum penalties and many times more than the $3.5 billion BP had already allotted. BP strongly disagreed with the ruling and filed an immediate appeal. The size of the ruling "casts a cloud over BP 's future, '' The New York Times reported.
On 2 July 2015, BP, the U.S. Justice Department and five gulf states announced that the company agreed to pay a record settlement of $18.7 billion. To date BP 's cost for the clean - up, environmental and economic damages and penalties has reached $54 billion.
In addition to the private lawsuits and civil governmental actions, the federal government charged multiple companies and five individuals with federal crimes.
In the November 2012 resolution of the federal charges against it, BP agreed to plead guilty to 11 felony counts related to the deaths of the 11 workers and paid a $4 billion fine. Transocean pleaded guilty to a misdemeanor charge as part of its $1.4 billion fine.
In April 2012, the Justice Department filed the first criminal charge against Kurt Mix, a BP engineer, for obstructing justice by deleting messages showing that BP knew the flow rate was three times higher than initial claims by the company, and knew that "Top Kill '' was unlikely to succeed, but claimed otherwise. Three more BP employees were charged in November 2012. Site managers Donald Vidrine and Robert Kaluza were charged with manslaughter for acting negligently in their supervision of key safety tests performed on the rig prior to the explosion, and failure to alert onshore engineers of problems in the drilling operation. David Rainey, BP 's former vice-president for exploration in the Gulf of Mexico, was charged with obstructing Congress by misrepresenting the rate that oil was flowing out of the well. Lastly, Anthony Badalamenti, a Halliburton manager, was charged with instructing two employees to delete data related to Halliburton 's cementing job on the oil well.
None of the charges against individuals resulted in any prison time, and no charges were levied against upper-level executives. Anthony Badalamenti was sentenced to one year of probation, Donald Vidrine paid a $50,000 fine and received 10 months of probation, Kurt Mix received 6 months of probation, and David Rainey and Robert Kaluza were acquitted.
Deepwater Horizon is a 2016 film based on the explosion, directed by Peter Berg and starring Mark Wahlberg. The Runner, a film directed by Austin Stark and starring Nicolas Cage, was released in the US on 7 August 2015; it depicts events following the Deepwater Horizon explosion and oil spill, though it "is a fictional story... and not totally about that environmental disaster."
list the whole numbers from 1 to 10 that are divisors of 12 300 | Table of divisors - wikipedia
The tables below list all of the divisors of the numbers 1 to 1000.
A divisor of an integer n is an integer m for which n / m is again an integer (which is necessarily also a divisor of n). For example, 3 is a divisor of 21, since 21 / 3 = 7 (and 7 is also a divisor of 21).
If m is a divisor of n then so is − m. The tables below only list positive divisors.
|
stone temple pilots lead singer cause of death | Scott Weiland - wikipedia
Scott Richard Weiland (/ ˈwaɪlənd /; né Kline, October 27, 1967 -- December 3, 2015) was an American musician, singer and songwriter. During a career spanning three decades, Weiland was best known as the lead singer of the band Stone Temple Pilots from 1989 to 2002 and 2008 to 2013. He was also a member of supergroup Velvet Revolver from 2003 to 2008 and recorded one album with another supergroup, Art of Anarchy. He also established himself as a solo artist, releasing three studio albums, two cover albums, and collaborations with several other musicians throughout his career.
Derided by critics early in his career, Weiland was known for his flamboyant and chaotic onstage persona; he was also known for constantly changing his appearance and vocal style, for his use of a megaphone in concerts for vocal effect, and for his battles with substance abuse. Now widely viewed as a talented and versatile vocalist, Weiland has been ranked in the Top 100 Heavy Metal Vocalists by Hit Parader (No. 57).
In 2012, Weiland formed Scott Weiland and the Wildabouts. The band received mixed reviews, and some critics and fans noted Weiland 's failing health. In December 2015, Weiland died of an accidental drug overdose on his tour bus in Minnesota at the age of 48. Upon his death, many critics and peers offered re-evaluations of Weiland 's life and career; those critics included David Fricke of Rolling Stone and Billy Corgan of The Smashing Pumpkins, who identified Weiland as one of three "voices of the generation '' alongside Kurt Cobain and Layne Staley.
Weiland was born at Kaiser Hospital in San Jose, California, the son of Sharon née Williams and Kent Kline. From his father 's side, he was of German descent. At age five his stepfather David Weiland legally adopted him and Scott took his surname. Around that time, Weiland moved to Bainbridge Township, Ohio, where he later attended Kenston High School. He moved back to California as a teenager and attended Edison High School in Huntington Beach and Orange Coast College. Before devoting himself to music full - time, he worked as a paste up artist for the Los Angeles Daily Journal legal newspaper.
In 1986 Weiland met bassist Robert DeLeo at a Black Flag concert in Long Beach, California. While discussing their love interests, the two realized they were both dating the same girl. They developed a bond over the incident, and ended up moving into her vacated apartment. Weiland's childhood friends Corey Hicock and David Allin rounded out the group, though both would soon be replaced by Eric Kretz and DeLeo's brother Dean. They took the name Stone Temple Pilots because of their fondness for the initials "STP." In one of the band's first opening performances as Mighty Joe Young, they opened for Electric Love Hogs, whose guitarist Dave Kushner would one day co-found Weiland's later band Velvet Revolver. In 1992, they released their first album, Core, spawning four hits ("Sex Type Thing," "Wicked Garden," "Creep," and "Plush").
In 1994, STP released their second record, Purple, which saw the development of a more distinctive identity for the band. Like Core, Purple was a big success for the band, spawning three hit singles ("Big Empty '', "Vasoline '' and "Interstate Love Song '') and selling more than six million copies. The critical response to Purple was more favorable, with Spin magazine calling it a "quantum leap '' from the band 's previous album.
In 1995, Weiland formed the alternative rock band The Magnificent Bastards with session drummer Victor Indrizzo in San Diego. The band included Zander Schloss and Jeff Nolan on guitars and Bob Thompson on bass. Only two songs were recorded by The Magnificent Bastards, "Mockingbird Girl, '' composed by Nolan, Schloss, and Weiland, appeared in the film Tank Girl and on its soundtrack, and a cover of John Lennon 's "How Do You Sleep? '' was recorded for the tribute album, Working Class Hero: A Tribute to John Lennon. Weiland rejoined Stone Temple Pilots in the fall of 1995, but STP was forced to cancel most of their 1996 -- 1997 tour in support of their third release, Tiny Music... Songs from the Vatican Gift Shop, which sold about two million albums. Weiland encountered problems with drug addiction at this time as well, which inspired some of his songs in the late - 1990s, and resulted in prison time.
In 1999, STP regrouped once again and released No. 4. The album contained the hit single "Sour Girl '' which featured a surreal music video with Sarah Michelle Gellar. That same year, Weiland also recorded two songs with the short - lived supergroup The Wondergirls. During this time period Weiland spent five months in jail for drug possession.
In November 2000, Weiland was invited to perform on the show VH1 Storytellers with the surviving members of The Doors. Weiland sang vocals on two Doors songs, "Break On Through (To the Other Side)" and "Five to One." That same month Stone Temple Pilots appeared on The Doors tribute CD, Stoned Immaculate, with their own rendition of "Break on Through" as the lead track. On June 19, 2001, STP released its fifth album, Shangri-La Dee Da. That same year the band headlined the Family Values Tour along with Linkin Park, Staind and Static-X. In late 2002, the band broke up after the DeLeo brothers and Weiland had significant altercations backstage.
In 2008, Stone Temple Pilots announced a 73 - date U.S. tour on April 7 and performed together for the first time since 2002. The reunion tour kicked off at the Rock on the Range festival on May 17, 2008. According to Dean DeLeo, steps toward a Stone Temple Pilots reunion started with a simple phone call from Weiland 's wife. She invited the DeLeo brothers to play at a private beach party, which led to the reconciliation of Weiland and the DeLeo brothers. However, Weiland said in a 2010 radio interview to promote the band 's self - titled release that the reunion was the result of Dean calling him and asking if he 'd be interested in reuniting the band to headline the Coachella Festival.
STP 's reunion tour was a success, and the band continued to tour throughout 2009 and began recording its sixth studio album. STP 's first album since 2001, Stone Temple Pilots, was released on May 25, 2010.
In September 2010, STP announced it was rescheduling several United States tour dates so that the band could take a "short break. '' STP toured Southeast Asia for the first time in 2011, playing in Philippines (Manila), Singapore and Indonesia (Jakarta). Following this, the band played successful shows in Australia, including sell out performances in Sydney and Melbourne.
The band said they were interested in a 20th anniversary tour to celebrate the release of Core with Scott commenting on January 2, 2012, "Well, we 're doing a lot of special things. (There 's) a lot of archival footage that we 're putting together, a coffee table book, hopefully a brand new album -- so many ideas. A box set and then a tour, of course. '' However, while the band did tour in 2012, they did not perform the album in its entirety as promised nor did they release a coffee table book, archival footage, or new album.
STP began to experience problems in 2012 that were said to have been caused by tensions between Weiland and the rest of the band. Despite the band's claims that their fall tour would celebrate the 20th anniversary of Core, this did not happen. On February 27, 2013, shortly before Weiland's solo tour was set to commence, Stone Temple Pilots announced on their website that "(...) they (had) officially terminated Scott Weiland."
Weiland criticized the band after they hired Linkin Park singer Chester Bennington as his replacement, claiming he was still a member and they should n't be calling themselves Stone Temple Pilots without him.
In 2002, former Guns N ' Roses members -- guitarist Slash, bassist Duff McKagan and drummer Matt Sorum -- as well as former Wasted Youth guitarist Dave Kushner were looking for a singer to help form a new band. Throughout his career Weiland had become acquainted with the four musicians; he became friends with McKagan after attending the same gym, was in rehab at the same time as Sorum and once played on the same bill as Kushner. Weiland was sent two discs of material to work with, but felt that the first disc "sounded like Bad Company gone wrong. '' When he was sent the second disc, Weiland was more positive, comparing it to Core - era Stone Temple Pilots, though he turned them down because Stone Temple Pilots had not yet separated.
When Stone Temple Pilots disbanded in 2003, the band sent Weiland new music, which he took into his studio and added vocals. This music eventually became the song "Set Me Free. '' Although he delivered the music to the band himself, Weiland was still unsure whether or not he wanted to join them, despite performing at an industry showcase at Mates. They recorded two songs with producer Nick Raskulinecz, a recorded version of "Set Me Free '' and a cover of Pink Floyd 's "Money, '' for the soundtracks to the movies The Hulk and The Italian Job, respectively. Weiland joined the band soon after, and "Set Me Free '' managed to peak at number 17 on the Mainstream Rock chart without any radio promotion or a record label. It was prior to a screening of The Hulk at Universal Studios that the band chose a name. After seeing a movie by Revolution Studios, Slash liked the beginning of the word, eventually thinking of Revolver because of its multiple meanings; the name of a gun, subtext of a revolving door which suited the band as well as the name of a Beatles album. When he suggested Revolver to the band, Weiland suggested ' Black Velvet ' Revolver, liking the idea of "something intimate like velvet juxtaposed with something deadly like a gun. '' They eventually arrived at Velvet Revolver, announcing it at a press conference and performance showcase at the El Rey Theatre while also performing the songs "Set Me Free '' and "Slither '' as well as covers of Nirvana 's "Negative Creep, '' Sex Pistols ' "Pretty Vacant '' and Guns N ' Roses ' "It 's So Easy. ''
-- Slash on Scott Weiland
Velvet Revolver 's debut album Contraband was released in June 2004 to much success. It debuted at number one on the Billboard 200 and has sold over three million copies worldwide to date. Two of the album 's songs, "Slither '' and "Fall to Pieces, '' reached number one on the Billboard Modern Rock Tracks chart. The song "Slither '' also won a Grammy Award for Best Hard Rock Performance with Vocal in 2005, an award Weiland had won previously with STP for the song "Plush '' in 1994. At the 2005 Grammy Awards, Weiland (along with the rest of Velvet Revolver) performed the Beatles song "Across the Universe '' along with Bono, Brian Wilson, Norah Jones, Stevie Wonder, Steven Tyler, Billie Joe Armstrong, Alison Krauss and Alicia Keys. On 2 July 2005, Weiland and Velvet Revolver performed at Live 8 in London, United Kingdom; in which Weiland was condemned for using strong language before the UK watershed during the performance.
Velvet Revolver released their second album, Libertad, on July 3, 2007, peaking at number five on the Billboard 200. The album 's first single "She Builds Quick Machines '' peaked at 74 on the Hot Canadian Digital Singles. The second and third singles, "The Last Fight '' and "Get Out the Door '', both peaked at number 16 and 34 on the Mainstream Rock Chart, respectively. Critical reception to the album was mixed. Though some critics praised the album and felt that Libertad gave the band an identity of their own, outside of the Guns N ' Roses and Stone Temple Pilots comparisons, others described the album as "bland '' and noted that the band seem to be "play (ing) to their strengths instead of finding a collective sound. ''
In 2007, Stone Temple Pilots guitarist Dean DeLeo discussed with Weiland an offer from a concert promoter to headline several summer festivals. Weiland accepted and said he had cleared the brief tour with his Velvet Revolver bandmates. He explained, "everything was cool. Then it wasn't," and said the rest of the band stopped talking to him. On March 20, 2008, Weiland revealed at Velvet Revolver's show in Glasgow that this would be the band's final tour. After several flare-ups on their personal blogs and in interviews, on April 1 a number of media outlets reported that Weiland would no longer be in Velvet Revolver. Stone Temple Pilots subsequently reunited for a tour and Velvet Revolver began auditioning singers.
In January 2012, guitarist Dave Kushner announced Velvet Revolver would reunite with Weiland for the first time in four years for a one-night, three-song gig to raise money for the family of recently deceased musician John O'Brien. On what the future would hold for the band and Weiland, Kushner replied, "We haven't played together in four years, and so we're really just like, Let's see how this goes."
In April 2012, Weiland remarked that he would like to reunite permanently with Velvet Revolver, saying that "if Maynard James Keenan can do it with A Perfect Circle and Tool, then there's no reason why I shouldn't go and do it with both bands." In May, in an interview with ABC Radio, Weiland said that he had reunited with the band permanently for a tour and an album, a claim Slash denied a few days later in an interview with 93X.
The Art of Anarchy project started in 2011, with Bumblefoot recording parts for the debut album in between touring with Guns N' Roses. Weiland wrote and recorded the vocals after sharing the song files back and forth with Bumblefoot from 2012 to 2013. Weiland also took part in promotional photo shoots and music videos in October 2014.
The band's self-titled debut album, tentatively scheduled for spring 2015, was released that June. On January 21, 2015, they released a 2:06 teaser of the new album. Bumblefoot served as producer and engineer on the album, which contains 11 tracks; its first single was "'Til The Dust Is Gone". However, Weiland distanced himself from the project, stating, "It was a project I did where I was just supposed to have written the lyrics and melodies, and I was paid to do it. I did some production work on it, and the next thing I knew there were press releases that I was in the band. (...) I'm not in the band." Weiland later added, "It's just something I kinda got into when I wasn't doing anything else... I sang over these stereo tracks and then sent it back. But it's not something I'm a part of." After his death, Weiland was replaced in the band by former Creed vocalist Scott Stapp.
While STP went on hiatus after the release of Tiny Music..., Weiland released a solo album in 1998 called 12 Bar Blues. Weiland wrote most of the songs on the album, and collaborated with several artists, notably Daniel Lanois, Sheryl Crow, Brad Mehldau and Jeff Nolan.
On November 25, 2008, Weiland released his second solo album, "Happy '' in Galoshes, produced by Weiland and songwriting - producing partner Doug Grean. Weiland went on tour in early 2009 to promote the album.
On August 30, 2011, Weiland released a covers album, A Compilation of Scott Weiland Cover Songs, exclusively through his website. The album was originally to be released along with Scott's autobiography until he decided to release it separately, stating, "(it) actually turned out so well that we're going to release a single and put it out on its own, 'cause I think it's... it's sort of my Pin Ups, I guess you'd say."
On October 4, 2011, Weiland released The Most Wonderful Time of the Year, an album consisting entirely of Christmas music. Weiland supported the album with a club tour in the United States. Two promotional recordings were taken from the album, cover versions of "Winter Wonderland" and "I'll Be Home for Christmas", each with its own music video.
In a November 2012 interview with Rolling Stone, Weiland said he foresaw 2013 being a busy year for him and his band, The Wildabouts. Scott Weiland and The Wildabouts planned to record a new album and to go on tour.
Weiland and The Wildabouts ' "Purple at the Core '' tour commenced in March 2013 with pop / rock band MIGGS as the opening act.
In June 2014, in an interview with San Diego radio station KBZT, Weiland stated that his debut album with The Wildabouts, titled Blaster, would be released in November 2014. However, it was pushed back and eventually released on March 31, 2015. Guitarist Jeremy Brown died one day before the album 's release. The cause of death was determined to be multiple drug intoxication, with coronary atherosclerosis and cardiomegaly being significant contributing factors. Nick Maybury replaced Brown in April 2015.
In 2006, Weiland launched his own record label, Softdrive Records. Later, Weiland announced that his label signed the up - and - coming rock band, Something to Burn. On December 19, 2008 Weiland signed a publishing deal with Bug Music, allowing Weiland to "receive funding to pursue the development of creative projects and writers for Bug Music through his co-founded label, Softdrive Records. '' The deal includes Weiland 's share of the Stone Temple Pilots catalog and future solo projects. On January 21, 2009 Weiland announced the launch of his clothing line, Weiland for English Laundry, in partnership with designer Christopher Wicks.
Weiland's vocal and musical style proved to be versatile, evolving constantly throughout his career. At the peak of Stone Temple Pilots' success in the early-to-mid-1990s, Weiland displayed a deep, baritone vocal style that was initially closely compared to that of Pearl Jam singer Eddie Vedder. However, as STP continued to branch out throughout its career, so did Weiland's vocal style. The band's third album, Tiny Music... Songs from the Vatican Gift Shop, had Weiland singing in a much higher, raspier tone to complement the band's more '60s-rock-influenced sound on that album. Later albums showcased Weiland's influences ranging from bossa nova on Shangri-La Dee Da to blues rock and classic rock on the band's 2010 self-titled album.
Weiland 's first solo record, 1998 's 12 Bar Blues, represented a huge shift in Weiland 's style, as the album featured a sound "rooted in glam rock, filtered through psychedelia and trip - hop. '' With Velvet Revolver, Weiland 's vocals ranged from his classic baritone to a rawer style to complement the band 's hard rock sound. A New York Post review of Velvet Revolver 's 2007 album Libertad commented that "Weiland 's vocals are crisp and controlled yet passionate. ''
Weiland 's second solo album, 2008 's "Happy '' in Galoshes, featured a wide variety of musical genres, such as bossa nova, country, neo-psychedelia and indie rock. Weiland 's 2011 solo effort, the Christmas album The Most Wonderful Time of the Year consisted entirely of Christmas music in a crooning style similar to that of David Bowie and Frank Sinatra, as well as some reggae and bossa nova.
Weiland married Janina Castaneda on September 17, 1994; the couple divorced in 2000. He married model Mary Forsberg on May 20, 2000. They had two children, Noah (born 2000) and Lucy (born 2002). Weiland and Forsberg divorced in 2007.
In 2005, Weiland and his son Noah were featured on comedian David Spade 's The Showbiz Show with David Spade during a comedy sketch about discouraging music file sharing. Noah has a line during the sketch in which he asks a little girl, "Please buy my daddy 's album so I can have food to eat. ''
Weiland was a Notre Dame Fighting Irish football fan, as his stepfather is an alumnus. In September 2006, Weiland performed at the University of Notre Dame 's Legends Restaurant on the night before a football game. He sang several of his solo songs as well as "Interstate Love Song '' and a cover of Pink Floyd 's "Wish You Were Here ''. In a 2007 interview with Blender magazine, Weiland mentioned that he was raised a Catholic.
Mary Forsberg Weiland 's autobiography Fall to Pieces was co-written with Larkin Warren and released in 2009. Scott Weiland 's autobiography, Not Dead & Not for Sale, co-written with David Ritz, was released May 17, 2011.
In a November 2012 interview with Rolling Stone, Weiland revealed that he was engaged to photographer Jamie Wachtel whom he met during the 2011 filming of his music video for the song, "I 'll Be Home for Christmas ''. Weiland and Wachtel married on June 22, 2013, at their Los Angeles home.
In 1995, Weiland was convicted of buying crack cocaine. He was sentenced to one year of probation. His drug use did not end after his sentence, but increased, and he moved into a hotel room for two months, next door to Courtney Love, where she said he "shot drugs the whole time '' with her.
Weiland revealed in 2001 he was diagnosed with bipolar disorder.
In a 2005 interview with Esquire, Weiland said that while performing in his first bands as a teenager, his drinking "escalated '' and he began using cocaine for the first time, which he referred to as a "sexual '' experience. In December 2007, Weiland was arrested and charged with DUI, his first arrest in over four years (since October 27, 2003). On February 7, 2008, Weiland checked into rehab and left in early March.
Weiland 's younger brother Michael died of cardiomyopathy in early 2007. The Velvet Revolver songs "For a Brother '' and "Pills, Demons, & Etc '' from the album Libertad are about Michael. Weiland said in an interview with MTV News in November 2008 that several songs on "Happy '' in Galoshes were inspired by the death of his brother and his separation from Mary Forsberg. In the same article, MTV News reported that Weiland had not done heroin since December 5, 2002. Weiland also admitted that he went through "a very short binge with coke '' in late 2007.
In April 2015, footage from a show appeared online, leaving fans to question the health of Weiland, who appeared in the video to be zoned out and giving a bizarre performance. A representative for Weiland responded stating that lack of sleep, several drinks and a faulty earpiece were to blame, not drugs. In June 2015, Weiland claimed that he had been off drugs for 13 years. His response was directed towards comments made by Filter's Richard Patrick, who claimed Weiland was using drugs and that even his fans were pushing him closer to death, saying "the fans are just sticking up for Scott, and they have no idea of what is going on behind the scenes and it's actually they're pushing him into his death, because they're making him believe that whatever I did is acceptable, and I can be as high as I want and I can do as much drugs as I want."
After Weiland 's death, the tour manager for The Wildabouts, Aaron Mohler, said, "A lot of times I 've seen Scott do coke so he could drink more. ''
Shortly after his death, Jamie Weiland, Scott 's third wife, acknowledged that her husband was drinking heavily before he left on his band 's last tour, but that he promised her that he would "get it together. '' She accompanied him on the tour for a week in November and said that Scott was "just killing it '' onstage, "every night taking it up a notch. ''
It has also been revealed that Weiland had hepatitis C, which he may have acquired from intravenous drug use.
Weiland was found dead on his tour bus on December 3, 2015, in Bloomington, Minnesota, while on tour with Scott Weiland and the Wildabouts. The band's scheduled gig that evening in nearby Medina, Minnesota, had been cancelled several days earlier, though they were still planning to play the next night in Rochester, Minnesota. He was 48. Police searched Weiland's tour bus and confirmed there were small amounts of cocaine in the bedroom where Weiland was discovered dead; two bags of cocaine and a bag of a green leafy substance were also found. Police also found prescription drugs, including Xanax, buprenorphine, ziprasidone, Viagra, and sleeping pills, on the tour bus. Tommy Black, bassist for the Wildabouts, was arrested by police on suspicion of possession of cocaine, although the charges against him were later dropped. Despite the discovery of drugs, no underlying cause of death was immediately given, although the medical examiner later determined it to be an accidental overdose of cocaine, ethanol, and methylenedioxyamphetamine (MDA); the examiner's office also noted his atherosclerotic cardiovascular disease, history of asthma, and prolonged substance abuse in its report.
News of Weiland's death quickly spread throughout the Internet, with many of his musical peers, including his former band members, along with fans and music critics throughout the world, sharing their condolences, tributes, and memories. A day following his death, his former bandmates in Stone Temple Pilots issued a statement saying that he was "gifted beyond words" but acknowledged his struggle with substance abuse, calling it "part of (his) curse." Weiland's ex-wife, Mary Forsberg, released an open letter about her ex-husband, his addictions, and his not being a good father to their children. Forsberg said, "I won't say he can rest now, or that he's in a better place. He belongs with his children barbecuing in the backyard and waiting for a Notre Dame game to come on. We are angry and sad about this loss, but we are most devastated that he chose to give up. Let's choose to make this the first time we don't glorify this tragedy with talk of rock and roll and the demons that, by the way, don't have to come with it." Weiland's death occurred a month after the departure of his replacement, Chester Bennington, from Stone Temple Pilots.
A small funeral for Weiland was held at Hollywood Forever Cemetery in Los Angeles on December 11, 2015. Members of both Stone Temple Pilots and Velvet Revolver attended. Chris Kushner, the wife of Velvet Revolver guitarist Dave Kushner, wrote on her Instagram page following the funeral, "A very sad day when (you) bury a friend. He was a good man. Don't believe everything (you) read. Remember, we were all there." Mary Forsberg and the two children were not in attendance, later holding a private ceremony in honor of Weiland.
In the wake of Weiland 's death, several other artists paid tribute to the singer by covering Stone Temple Pilots tunes in concert, including Life of Agony, Saint Asonia, Umphrey 's McGee, Candlebox, Halestorm, and Pop Evil, among others, while Chris Cornell dedicated a performance of "Say Hello 2 Heaven '' by Temple of the Dog to the singer.
On the Smashing Pumpkins' website, Billy Corgan praised Weiland, saying "It was STP's 3rd album that had got me hooked, a wizardly mix of glam and post-punk, and I confessed to Scott, as well as the band many times, how wrong I'd been in assessing their native brilliance. And like Bowie can and does, it was Scott's phrasing that pushed his music into a unique, and hard to pin down, aesthetic sonicsphere. Lastly, I'd like to share a thought which, though clumsy, I hope would please Scott In Hominum. And that is if you asked me who I truly believed were the great voices of our generation, I'd say it were he, Layne, and Kurt."
|
who loves you frankie valli and the four seasons | Who Loves You (song) - wikipedia
"Who Loves You '' is the title song of a 1975 album by The Four Seasons. It was composed by Bob Gaudio and Judy Parker and produced by Gaudio. It reached number 3 on the Billboard Hot 100 in November 1975.
After their release from Philips, the group signed with Motown and released one album and three singles for the label in 1972 and 1973. All of the Motown recordings failed to chart in the U.S., and the company dropped the band. In August 1975, "Who Loves You" entered the Hot 100 as Frankie Valli's "Swearin' to God" was sliding off the chart.
(* - Canadian RPM chart data incomplete for late 1975)
There were three versions of "Who Loves You" released in the United States: the one on the Who Loves You album is four minutes, 20 seconds long and begins with a short percussion section before the start of the vocals. The A-side of the single has a four-minute, four-second version which starts with an unusual "fade-in" beginning, opening on the first word of the lyrics; the B-side (labeled "Who Loves You (disco version)") is the same as the A-side, with the instrumental break done twice and the song ending sooner.
Although the Four Seasons ' trademark falsetto is present on "Who Loves You '', Valli 's vocal performance on the recording is limited to singing lead on the verses.
For a record from a group so long without any hit records, "Who Loves You '' was a tremendous success. Released in August 1975, the single spent 20 weeks on the Hot 100 (longer than any Four Seasons single before) and managed to stay on the chart until the beginning of 1976.
This song was edited heavily and included as the closing number of the musical Jersey Boys. The second verse and instrumental break are completely omitted, and instead of the fade-out, a loud, high-pitched ending chord is sung by the full company. However, the Original Broadway Cast Recording includes the instrumental break.
The song was often used as bumper music by late-night radio talk show host Art Bell when he hosted Coast to Coast AM in the 1990s.
|
what was the number 1 box office motion picture for the week of christmas in december 1997 | List of 1997 box office number - one films in the United States - wikipedia
This is a list of films which have placed number one at the weekend box office in the United States during 1997.
|
percentage of alcohol allowed while driving in india | Alcohol laws of India - wikipedia
The legal drinking age in India and the laws which regulate the sale and consumption of alcohol vary significantly from state to state. In India, consumption of alcohol is prohibited in the states of Bihar, Gujarat and Nagaland, as well as the union territory of Lakshadweep. There is a partial ban on alcohol in some districts of Manipur. All other Indian states permit alcohol consumption but fix a legal drinking age, which varies by region. In some states, the legal drinking age can be different for different types of alcoholic beverage.
In spite of legal restrictions, alcohol consumption in India has risen over 55 % over a period of 20 years (according to OECD figures).
Alcohol is a subject in the State List under the Seventh Schedule of the Constitution of India. Therefore, the laws governing alcohol vary from state to state.
Liquor in India is generally sold at liquor stores, restaurants, hotels, bars, pubs, clubs and discos. Some states, like Kerala and Tamil Nadu, prohibit private parties from owning liquor stores making the state government the sole retailer of alcohol in those states. In some states, liquor may be sold at groceries, departmental stores, banquet halls and / or farm houses. Some tourist areas have special laws allowing the sale of alcohol on beaches and houseboats.
Home delivery of alcoholic beverages is generally illegal in Delhi, although home delivery of beer and wine by private vendors and departmental stores is permitted.
The legal blood alcohol content (BAC) limit is 0.03%, equivalent to 30 μl of alcohol per 100 ml of blood.
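The stated equivalence is a straightforward percent-by-volume conversion: 0.03% of 100 ml is 0.03 ml, i.e. 30 μl. A minimal, purely illustrative Python sketch of that arithmetic (the function name is invented for this example):

```python
def bac_percent_to_microlitres(bac_percent, blood_volume_ml=100.0):
    """Convert a blood alcohol concentration given as a percentage
    (by volume) into microlitres of alcohol in the given blood volume."""
    millilitres_of_alcohol = (bac_percent / 100.0) * blood_volume_ml
    return millilitres_of_alcohol * 1000.0  # 1 ml = 1000 microlitres

# India's 0.03% limit works out to 30 microlitres per 100 ml of blood.
print(bac_percent_to_microlitres(0.03))
```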
On 1 March 2012, the Union Cabinet approved proposed changes to the Motor Vehicle Act. Higher penalties were introduced, including fines from ₹ 2,000 to ₹ 10,000 and imprisonment from 6 months to 4 years. Different penalties are assessed depending on the blood alcohol content at the time of the offence.
Dry Days are specific days when the sale of alcohol is not permitted. Most of the Indian states observe these days on major national festivals / occasions such as Republic Day (January 26), Independence Day (August 15) and Gandhi Jayanti (October 2). Dry days are also observed on and around voting days.
Prohibited days are also announced when elections are held in the state.
Every excise year, the Government of Delhi, notifies the number of Prohibited days in a year. The three national holidays -- January 26, October 2 and August 15, are always prohibited days, and additional prohibited days are announced at the start of the excise year (1 July).
During elections, dry days are observed on the day of the vote, the day before the vote, and during vote counting.
Sundays are no longer observed as Prohibited days in the state.
Gandhi Jayanti (October 2) and Independence Day (August 15) are observed as prohibited days, and additional prohibited days are announced when elections are held in the state.
This list may vary depending on the date of festivals as well as specific Prohibited day announcements by the Government of Maharashtra.
Prohibited days are designated on election days, plus the two days before and after the vote, and the day(s) of the count, plus one day before and one day after the counting days.
The district collector can also designate any day as a prohibited day by giving seven days' notice.
However, no dry day rule applies to five-star hotels, clubs and resorts in West Bengal. Drinks may be served and consumed in those venues even on "dry days", and private consumption is also allowed. Only the open sale of liquor at restaurants, liquor shops and other permitted places is disallowed on those days.
|
what activities and values characterize the honors college at uca | Norbert O. Schedler Honors College - wikipedia
The Norbert O. Schedler Honors College is an interdisciplinary program at the University of Central Arkansas. One of the first honors colleges (in contrast to numerous honors programs) in the country, the Schedler Honors College leads to a minor in Interdisciplinary Studies. Successful completion of the minor requires a senior thesis or a supplemental senior project such as a performance, exhibit, or other creative work. The honors college derives its pedagogical underpinnings from the traditional small liberal arts college, priding itself on small class sizes, close teacher/student relationships, and intense study of a variety of interdisciplinary subjects.
The college was founded in 1982 by professor of philosophy Dr. Norbert Schedler and was one of the first Honors Colleges in the United States. The first class in the spring of 1982 was composed of 60 students with an average ACT score of 26.8 and a very tight budget.
The Interdisciplinary Studies Minor is satisfied by the completion of a two - tiered system of courses. The first tier of Honors courses makes up the Honors Program. These four courses are considered the core classes and the credit from these classes is applied to the students ' general education requirements.
The typical sequence of classes in the first tier is:
During a student 's sophomore year, the student must complete a sophomore lecture on a subject of their choice and meet certain GPA requirements in order to continue into the Honors college, the second tier of Honors course work. The 15 credit hours in the second tier satisfy the requirements for an Interdisciplinary Studies Minor. In satisfying the minor requirements, students develop their own curriculum by selecting from a variety of course offerings.
For completion of the minor, at minimum the student must complete:
Upper level seminar courses are offered in subjects such as religion, gender studies, constitutional law, ecology, storytelling, the history of science and technology, and social movements.
The Schedler Honors College offers a number of activities that supplement the standard course load, which it calls co-curricular events. These include Hightables, a series of lectures given by visiting academics; Soapboxes, a series of discussion groups led by Honors students or faculty; a weekly meditation group; and a Foreign Film Series. These events are usually an hour to two hours in length and happen regularly throughout the semester.
The Schedler Honors College also hosts two special events on a biennial basis. The first of these, Issues in the Public Square, is a weeklong series of lectures and discussion groups concentrating on a single theme, with events led by students, faculty, and visiting academics. The second special event, Challenge Week, falls on alternating years from Issues in the Public Square. Historically Challenge Week was a weeklong event, but in recent years it has been expanded to two weeks to accommodate the increased number of speakers invited. Each Challenge Week concentrates on a theme; recent topics include ecology, intelligent design, and cultural conflict in America. A number of guest speakers are invited for each Challenge Week; these lecturers are experts in their fields and form the core of the event's schedule. Hightables, Soapboxes, and roundtable discussions on related topics supplement these speakers. Past guest lecturers have included George McGovern, Ralph Nader, Ann Coulter, Michael Moore, Manning Marable, Neil Gaiman, Chuck Klosterman and Robert F. Kennedy, Jr.
|
the song hooked on a feeling by bj thomas | Hooked on a Feeling - wikipedia
"Hooked on a Feeling '' is a 1968 pop song written by Mark James and originally performed by B.J. Thomas. Thomas 's version featured the sound of the electric sitar, and reached number five in 1969 on the Billboard Hot 100. It has been recorded by many other artists, including Blue Swede, whose version reached number one in the United States in 1974. The Blue Swede version made singer Björn Skifs ' "Ooga - Chaka - Ooga - Ooga '' intro well known (and famous in Sweden at the time), although it had been used originally by British musician Jonathan King in his 1971 version of the song.
|
who came up with the germ theory of disease | Germ theory of disease - wikipedia
The germ theory of disease states that many diseases are caused by microorganisms. These organisms, too small to see without magnification, invade humans, animals, and other living hosts, and their growth and reproduction within their hosts can cause disease. "Germ" may refer not just to a bacterium but to any type of microorganism, especially one that causes disease, such as a protist, fungus, virus, prion, or viroid. Microorganisms that cause disease are called pathogens, and the diseases they cause are called infectious diseases. Even when a pathogen is the principal cause of a disease, environmental and hereditary factors often influence the severity of the disease and whether a potential host individual becomes infected when exposed to the pathogen.
The germ theory was proposed by Girolamo Fracastoro in 1546, and expanded upon by Marcus von Plenciz in 1762. Such views were held in disdain, however, and Galen 's miasma theory remained dominant among scientists and doctors. The nature of this doctrine prevented them from understanding how diseases actually progressed, with predictable consequences. By the early nineteenth century, smallpox vaccination was commonplace in Europe, though doctors were unaware of how it worked or how to extend the principle to other diseases. Similar treatments had been prevalent in India from just before 1000 A.D. A transitional period began in the late 1850s as the work of Louis Pasteur and Robert Koch provided convincing evidence; by 1880, the miasma theory was struggling to compete with the germ theory of disease. Eventually, a "golden era '' of bacteriology ensued, during which the theory quickly led to the identification of the actual organisms that cause many diseases. Viruses were discovered in the 1890s.
The miasma theory was the predominant theory of disease transmission before the germ theory took hold towards the end of the 19th century. It held that diseases such as cholera, chlamydia infection, or the Black Death were caused by a miasma (μίασμα, Ancient Greek: "pollution ''), a noxious form of "bad air '' emanating from rotting organic matter. Miasma was considered to be a poisonous vapor or mist filled with particles from decomposed matter (miasmata) that was identifiable by its foul smell. The theory posited that diseases were the product of environmental factors such as contaminated water, foul air, and poor hygienic conditions. Such infections, according to the theory, were not passed between individuals but would affect those within a locale that gave rise to such vapors.
In Antiquity, the Greek historian Thucydides (c. 460 – c. 400 BC) was the first person to state, in his account of the plague of Athens, that diseases could spread from an infected person to others. One theory for the spread of contagious diseases that were not spread by direct contact was that they were spread by "seeds" (Latin: semina) present in the air. In his poem De rerum natura (On the Nature of Things, ca. 56 BC), the Roman poet Lucretius (ca. 99 BC – ca. 55 BC) stated that the world contained various "seeds", some of which could sicken a person if they were inhaled or if they contaminated his food. The Roman statesman Marcus Terentius Varro (116–27 BC) wrote, in his Rerum rusticarum libri III (Three Books on Agriculture, 36 BC): "Precautions must also be taken in the neighborhood of swamps (...) because there are bred certain minute creatures which cannot be seen by the eyes, which float in the air and enter the body through the mouth and nose and there cause serious diseases." The Greek physician Galen (129 AD – ca. 200/216 AD) speculated in his On Initial Causes (ca. 175 AD) that some patients might have "seeds of fever". In his On the Different Types of Fever (ca. 175 AD), Galen speculated that plagues were spread by "certain seeds of plague" present in the air. And in his Epidemics (ca. 176–178 AD), Galen explained that patients might relapse during recovery from a fever because some "seed of the disease" lurked in their bodies, which would cause a recurrence of the disease if the patients did not follow a physician's therapeutic regimen.
During the Middle Ages, Isidore of Seville (ca. 560 -- 636) mentioned "plague - bearing seeds '' (pestifera semina) in his On the Nature of Things (ca. 613 AD). Later in 1345, Tommaso del Garbo (ca. 1305 -- 1370) of Bologna, Italy mentioned Galen 's "seeds of plague '' in his work Commentaria non parum utilia in libros Galeni (Helpful commentaries on the books of Galen).
The Italian scholar and physician Girolamo Fracastoro proposed in 1546 in his book De Contagione et Contagiosis Morbis that epidemic diseases are caused by transferable seed - like entities (seminaria morbi) that transmit infection by direct or indirect contact, or even without contact over long distances. The diseases were categorised based on how they were transmitted, and how long they could lie dormant.
Italian physician Francesco Redi provided early evidence against spontaneous generation. He devised an experiment in 1668 in which he used three jars. He placed a meatloaf and egg in each of the three jars. He had one of the jars open, another one tightly sealed, and the last one covered with gauze. After a few days, he observed that the meatloaf in the open jar was covered by maggots, and the jar covered with gauze had maggots on the surface of the gauze. However, the tightly sealed jar had no maggots inside or outside it. He also noticed that the maggots were found only on surfaces that were accessible by flies. From this he concluded that spontaneous generation is not a plausible theory.
Microorganisms are said to have been first directly observed in the 1670s by Anton van Leeuwenhoek, an early pioneer in microbiology. Yet Athanasius Kircher may have observed them earlier. When Rome was struck by the bubonic plague in 1656, Kircher spent days on end caring for the sick. Searching for a cure, Kircher observed microorganisms under the microscope and invented the germ theory of disease, which he outlined in his Scrutinium pestis physico - medicum (Rome 1658). Building on Leeuwenhoek 's work, physician Nicolas Andry argued in 1700 that microorganisms he called "worms '' were responsible for smallpox and other diseases.
In 1720, Richard Bradley theorised that the plague and ' all pestilential distempers ' were caused by ' poisonous insects ', living creatures viewable only with the help of microscopes.
In 1762, the Austrian physician Marcus Antonius von Plenciz (1705 - 1786) published a book titled Opera medico - physica. It outlined a theory of contagion stating that specific ' animalculae ' in the soil and the air were responsible for causing specific diseases. Von Plenciz noted the distinction between diseases which are both epidemic and contagious (like measles and dysentery), and diseases which are contagious but not epidemic (like rabies and leprosy). The book cites Anton van Leeuwenhoek to show how ubiquitous such animalculae are, and was unique for describing the presence of germs in ulcerating wounds. Ultimately, the theory espoused by von Plenciz was not accepted by the scientific community.
The Italian Agostino Bassi was the first person to prove that a disease was caused by a microorganism when he conducted a series of experiments between 1808 and 1813, demonstrating that a "vegetable parasite '' caused a disease in silkworms known as calcinaccio -- this disease was devastating the French silk industry at the time. The "vegetable parasite '' is now known to be a fungus pathogenic to insects called Beauveria bassiana (named after Bassi).
Ignaz Semmelweis, a Hungarian obstetrician working at the Vienna General Hospital (Allgemeines Krankenhaus) in 1847, noticed the dramatically high maternal mortality from puerperal fever following births assisted by doctors and medical students. However, births attended by midwives were relatively safe. Investigating further, Semmelweis made the connection between puerperal fever and examinations of delivering women by doctors, and further realized that these physicians had usually come directly from autopsies. Asserting that puerperal fever was a contagious disease and that matter from autopsies was implicated in its development, Semmelweis made doctors wash their hands with chlorinated lime water before examining pregnant women, thereby reducing the mortality rate from 18 % to 2.2 % at his hospital. Nevertheless, he and his theories were rejected by most of the contemporary medical establishment.
Gideon Mantell, the Sussex doctor more famous for discovering dinosaur fossils, spent time with his microscope, and speculated in his Thoughts On Animalcules (1850) that perhaps "many of the most serious maladies which afflict humanity, are produced by peculiar states of invisible animalcular life ''.
John Snow was a skeptic of the then - dominant miasma theory. Even though the germ theory of disease pioneered by Girolamo Fracastoro had not yet achieved full development or widespread currency, Snow demonstrated a clear understanding of germ theory in his writings. He first published his theory in an 1849 essay On the Mode of Communication of Cholera, in which he correctly suggested that the fecal - oral route was the mode of communication, and that the disease replicated itself in the lower intestines. He even proposed in his 1855 edition of the work, that the structure of cholera was that of a cell.
Having rejected effluvia and the poisoning of the blood in the first instance, and being led to the conclusion that the disease is something that acts directly on the alimentary canal, the excretions of the sick at once suggest themselves as containing some material which being accidentally swallowed might attach itself to the mucous membrane of the small intestines, and there multiply itself by appropriation of surrounding matter, in virtue of molecular changes going on within it, or capable of going on, as soon as it is placed in congenial circumstances.
For the morbid matter of cholera having the property of reproducing its own kind, must necessarily have some sort of structure, most likely that of a cell. It is no objection to this view that the structure of the cholera poison can not be recognized by the microscope, for the matter of smallpox and of chancre can only be recognized by their effects, and not by their physical properties.
Snow 's 1849 recommendation that water be "filtered and boiled before it is used '' is one of the first practical applications of germ theory in the area of public health and is the antecedent to the modern boil - water advisory.
In 1855 he published a second edition of his article, documenting his more elaborate investigation of the effect of the water supply in the Soho, London epidemic of 1854.
By talking to local residents, he identified the source of the outbreak as the public water pump on Broad Street (now Broadwick Street). Although Snow 's chemical and microscope examination of a water sample from the Broad Street pump did not conclusively prove its danger, his studies of the pattern of the disease were convincing enough to persuade the local council to disable the well pump by removing its handle. This action has been commonly credited as ending the outbreak, but Snow observed that the epidemic may have already been in rapid decline.
Snow later used a dot map to illustrate the cluster of cholera cases around the pump. He also used statistics to illustrate the connection between the quality of the water source and cholera cases. He showed that the Southwark and Vauxhall Waterworks Company was taking water from sewage - polluted sections of the Thames and delivering the water to homes, leading to an increased incidence of cholera. Snow 's study was a major event in the history of public health and geography. It is regarded as one of the founding events of the science of epidemiology.
Later, researchers discovered that this public well had been dug only three feet from an old cesspit, which had begun to leak fecal bacteria. The diapers of a baby, who had contracted cholera from another source, had been washed into this cesspit. Its opening was originally under a nearby house, which had been rebuilt farther away after a fire. The city had widened the street and the cesspit was lost. It was common at the time to have a cesspit under most homes. Because a cesspit could fill faster than its sewage decomposed into the soil, most families tried to have their raw sewage collected and dumped into the Thames.
After the cholera epidemic had subsided, government officials replaced the handle on the Broad Street pump. They had responded only to the urgent threat posed to the population, and afterward they rejected Snow 's theory. To accept his proposal would have meant accepting the fecal - oral route of transmission of disease, which they dismissed.
The more formal experiments on the relationship between germ and disease were conducted by Louis Pasteur between 1860 and 1864. He discovered the pathology of puerperal fever and the pyogenic vibrio in the blood, and suggested using boric acid to kill these microorganisms before and after confinement.
Pasteur further demonstrated between 1860 and 1864 that fermentation and the growth of microorganisms in nutrient broths did not proceed by spontaneous generation. He exposed freshly boiled broth to air in vessels that contained a filter to stop all particles passing through to the growth medium, and even with no filter at all, with air being admitted via a long tortuous tube that would not pass dust particles. Nothing grew in the broths: therefore the living organisms that grew in such broths came from outside, as spores on dust, rather than being generated within the broth.
Pasteur discovered that another serious disease of silkworms, pébrine, was caused by a small microscopic organism now known as Nosema bombycis (1870). Pasteur saved France 's silk industry by developing a method to screen silkworm eggs for those that were not infected, a method that is still used today to control this and other silkworm diseases.
Robert Koch is known for developing four basic criteria (known as Koch 's postulates) for demonstrating, in a scientifically sound manner, that a disease is caused by a particular organism. These postulates grew out of his seminal work with anthrax using purified cultures of the pathogen that had been isolated from diseased animals.
Koch 's postulates were developed in the 19th century as general guidelines to identify pathogens that could be isolated with the techniques of the day. Even in Koch 's time, it was recognized that some infectious agents were clearly responsible for disease even though they did not fulfill all of the postulates. Attempts to rigidly apply Koch 's postulates to the diagnosis of viral diseases in the late 19th century, at a time when viruses could not be seen or isolated in culture, may have impeded the early development of the field of virology. Currently, a number of infectious agents are accepted as the cause of disease despite their not fulfilling all of Koch 's postulates. Therefore, while Koch 's postulates retain historical importance and continue to inform the approach to microbiologic diagnosis, fulfillment of all four postulates is not required to demonstrate causality.
Koch 's postulates have also influenced scientists who examine microbial pathogenesis from a molecular point of view. In the 1980s, a molecular version of Koch 's postulates was developed to guide the identification of microbial genes encoding virulence factors.
Koch 's postulates:
1. The microorganism must be found in abundance in all organisms suffering from the disease, but should not be found in healthy organisms.
2. The microorganism must be isolated from a diseased organism and grown in pure culture.
3. The cultured microorganism should cause disease when introduced into a healthy organism.
4. The microorganism must be reisolated from the inoculated, diseased experimental host and identified as being identical to the original specific causative agent.
However, Koch abandoned the universalist requirement of the first postulate altogether when he discovered asymptomatic carriers of cholera and, later, of typhoid fever. Asymptomatic or subclinical infection carriers are now known to be a common feature of many infectious diseases, especially viruses such as polio, herpes simplex, HIV, and hepatitis C. As a specific example, all doctors and virologists agree that poliovirus causes paralysis in just a few infected subjects, and the success of the polio vaccine in preventing disease supports the conviction that the poliovirus is the causative agent.
The third postulate specifies "should '', not "must '', because as Koch himself proved in regard to both tuberculosis and cholera, not all organisms exposed to an infectious agent will acquire the infection. Noninfection may be due to such factors as general health and proper immune functioning; acquired immunity from previous exposure or vaccination; or genetic immunity, as with the resistance to malaria conferred by possessing at least one sickle cell allele.
The second postulate may also be suspended for certain microorganisms or entities that can not (at the present time) be grown in pure culture, such as prions responsible for Creutzfeldt -- Jakob disease. In summary, a body of evidence that satisfies Koch 's postulates is sufficient but not necessary to establish causation.
In the 1870s, Joseph Lister was instrumental in developing practical applications of the germ theory of disease with respect to sanitation in medical settings and aseptic surgical techniques -- partly through the use of carbolic acid (phenol) as an antiseptic.
|
who plays andre fields on one tree hill | Robbie Jones (actor) - Wikipedia
Robert Lee "Robbie '' Jones III (born September 25, 1977) is an American actor. He is best known for his role as Quentin Fields in One Tree Hill. In 2009, Jones starred in the film Hurricane Season with Forest Whitaker. In 2010, he starred in the series Hellcats, in which he portrayed Lewis Flynn. His most recent film role was in Tyler Perry 's Temptation: Confessions of a Marriage Counselor alongside Jurnee Smollett - Bell and Lance Gross.
Jones attended the University of California, Berkeley and also played on the California Golden Bears men 's basketball team from 1996 to 2000.
|
when is the champions league knockout stage draw | 2017 -- 18 UEFA Champions League knockout phase - wikipedia
The 2017 -- 18 UEFA Champions League knockout phase began on 13 February and will end on 26 May 2018 with the final at the NSC Olimpiyskiy Stadium in Kiev, Ukraine, to decide the champions of the 2017 -- 18 UEFA Champions League. A total of 16 teams compete in the knockout phase.
Times up to 24 March 2018 (round of 16) are CET (UTC + 1), thereafter (quarter - finals and beyond) times are CEST (UTC + 2).
The schedule of the knockout phase is as follows (all draws are held at the UEFA headquarters in Nyon, Switzerland).
The knockout phase involves the 16 teams which qualify as winners and runners - up of each of the eight groups in the group stage.
Each tie in the knockout phase, apart from the final, is played over two legs, with each team playing one leg at home. The team that scores more goals on aggregate over the two legs advances to the next round. If the aggregate score is level, the away goals rule is applied, i.e. the team that scored more goals away from home over the two legs advances. If away goals are also equal, then thirty minutes of extra time is played. The away goals rule is again applied after extra time, i.e. if there are goals scored during extra time and the aggregate score is still level, the visiting team advances by virtue of more away goals scored. If no goals are scored during extra time, the tie is decided by a penalty shoot - out. In the final, which is played as a single match, if scores are level at the end of normal time, extra time is played, followed by a penalty shoot - out if scores remain tied.
The mechanism of the draws for each round is as follows:
The full bracket will be known after the semi-final draw.
The draw for the round of 16 was held on 11 December 2017, 12: 00 CET.
The first legs were played on 13, 14, 20 and 21 February, and the second legs will be played on 6, 7, 13 and 14 March 2018.
Juventus v Tottenham Hotspur
Tottenham Hotspur v Juventus
Basel v Manchester City
Manchester City v Basel
Porto v Liverpool
Liverpool v Porto
Sevilla v Manchester United
Manchester United v Sevilla
Real Madrid v Paris Saint - Germain
Paris Saint - Germain v Real Madrid
Shakhtar Donetsk v Roma
Roma v Shakhtar Donetsk
Chelsea v Barcelona
Barcelona v Chelsea
Bayern Munich v Beşiktaş
Beşiktaş v Bayern Munich
The draw for the quarter - finals will be held on 16 March 2018, 12: 00 CET.
The first legs will be played on 3 and 4 April, and the second legs will be played on 10 and 11 April 2018.
The draw for the semi-finals will be held on 13 April 2018, 12: 00 CEST.
The first legs will be played on 24 and 25 April, and the second legs will be played on 1 and 2 May 2018.
The 2018 UEFA Champions League Final will be played at the NSC Olimpiyskiy in Kiev on 26 May 2018. The "home '' team for the final (for administrative purposes) will be determined by an additional draw held after the semi-final draw.
TBD v TBD
|
how many episodes of famous in love season 1 | Famous in Love - wikipedia
Famous in Love is an American drama television series that premiered on Freeform on April 18, 2017, and is based on the novel of the same name by Rebecca Serle. The series stars Bella Thorne, Charlie DePew, Georgie Flores, Carter Jenkins, Niki Koss, Keith Powers, Pepi Sonuga, and Perrey Reeves.
Paige Townsen, an ordinary college student, gets her big break after auditioning for the starring role in a Hollywood blockbuster and must now navigate her new star - studded life and undeniable chemistry with her co-lead and her best friend.
Freeform, then known as ABC Family, had picked up the pilot for fast - track development on March 19, 2015. The pilot was shot in November 2015; Freeform greenlit the pilot on April 7, 2016, and shooting began on July 13, 2016, and wrapped up on October 18, 2016. On November 18, 2016, Freeform announced that the series would premiere on April 18, 2017. Freeform also released the entire season for viewing online on April 18, 2017. The series is based on the novel of the same name, written by Rebecca Serle. Serle worked with I. Marlene King to develop the novel into a television series. Freeform renewed the series for a second season on August 3, 2017.
|
jorge luis borges the lottery in babylon summary | The Lottery in Babylon - Wikipedia
"The Lottery in Babylon '' (or "The Babylon Lottery ''; original Spanish "La lotería en Babilonia '') is a fantasy short story by Argentinian writer Jorge Luis Borges. It first appeared in 1941 in the literary magazine Sur, and was then included in the 1941 collection The Garden of Forking Paths (El jardín de los senderos que se bifurcan), which in turn became the part one of Ficciones (1944).
The story describes a mythical Babylon in which all activities are dictated by an all - encompassing lottery, a metaphor for the role of chance in one 's life. Initially, the lottery was run as a lottery would be, with tickets purchased and the winner receiving a monetary reward. Later, punishments and larger monetary rewards were introduced. Further, participation became mandatory for all but the elite. Finally, it simultaneously became so all - encompassing and so secret that some whispered "the Company has never existed, and never will. ''
A further interpretation is that the Lottery and the Company that runs it are actually an allegory of a deity or Zeus. Like the workings of a deity in the eyes of men, the Company that runs the Lottery acts, apparently, at random and through means not known by its subjects, leaving men with two options: to accept it as all - knowing and all - powerful but mysterious, or to deny its existence. Both positions have their supporters within the allegory.
In many other books, Borges dealt with metaphysical questions about the meaning of life and the possible existence of higher authorities, and also presented this same paradoxical vision of a world that may be run by a good and wise deity but seems to lack any discernible meaning. This view may also be considered present in "The Library of Babel '' ("La biblioteca de Babel ''), another Borges story.
Borges makes a brief reference to Franz Kafka in the name Qaphqa, the legendary latrine where spies of the Company leave information.
|
where does the sunrise first in the us | Cadillac Mountain - wikipedia
Cadillac Mountain is located on Mount Desert Island, within Acadia National Park. With an elevation of 1,530 feet (466 meters), its summit is the highest point in Hancock County and the highest within 25 miles (40 km) of the shoreline of the North American continent between the Cape Breton Highlands, Nova Scotia and Mexican peaks 180 miles (290 km) south of the Texas border.
Before being renamed in 1918, the mountain had been called Green Mountain. The new name honors the French explorer and adventurer Antoine Laumet de La Mothe, sieur de Cadillac. In 1688, De la Mothe requested and received from the Governor of New France a parcel of land in an area known as Donaquec which included part of the Donaquec River (now the Union River) and the island of Mount Desert in the present - day U.S. state of Maine. Antoine Laumet de La Mothe, a shameless self - promoter who had already appropriated the "de la Mothe '' portion of his name from a local nobleman in his native Picardy, thereafter referred to himself as Antoine de la Mothe, sieur de Cadillac, Donaquec, and Mount Desert.
From 1883 until 1893 the Green Mountain Cog Railway ran to the summit to take visitors to the Green Mountain Hotel on the summit. The hotel was burned down in 1895. Also in 1895, the cog train was sold and moved to the Mount Washington Cog Railway in New Hampshire.
There are various hiking trails to the summit of Cadillac Mountain, some more challenging than others. There is also a paved road to the top.
Driving or hiking to the summit of Cadillac Mountain to see "the nation 's first sunrise '' is a popular activity among visitors of Acadia National Park. However, Cadillac only sees the first sunrise in the fall and winter, when the sun rises south of due east. During most of the spring and summer, the sun rises first on Mars Hill, 150 miles (240 km) to the northeast. For a few weeks around the equinoxes, the sun rises first at West Quoddy Head in Lubec, Maine.
On exceptionally clear days, it is possible to see Mount Katahdin, Maine 's highest mountain, to the north and the Canadian province of Nova Scotia to the east, both over one hundred miles away.
Gallery: The highway to Cadillac Mountain; The first rays of sun in the United States as seen from Cadillac Mountain; Sunrise with the Porcupine Islands in the foreground; The Porcupine Islands at noon; Sunrise with Dorr Mountain in the foreground and the Atlantic Ocean in the background; View of the Atlantic Ocean from Cadillac Mountain; Tourists on Cadillac Mountain; The rocky terrain atop Cadillac Mountain; Plaque honoring Stephen Mather, first director of the National Park Service
|
who was the sultan of delhi when tanu invaded the city | Delhi Sultanate - Wikipedia
The Delhi Sultanate (Persian: دهلی سلطان, Urdu: دہلی سلطنت ) was a Muslim sultanate based mostly in Delhi that stretched over large parts of the Indian subcontinent for 320 years (1206 -- 1526). Five dynasties ruled over the Delhi Sultanate sequentially: the Mamluk dynasty (1206 -- 1290), the Khalji dynasty (1290 -- 1320), the Tughlaq dynasty (1320 -- 1414), the Sayyid dynasty (1414 -- 1451), and the Lodi dynasty (1451 -- 1526). The sultanate is noted for being one of the few states to repel an attack by the Mongol Empire, and enthroned one of the few female rulers in Islamic history, Razia Sultana, who reigned from 1236 to 1240.
Qutb al - Din Aibak, a former Turkic Mamluk slave of Muhammad Ghori, was the first sultan of Delhi, and his Mamluk dynasty conquered large areas of northern India. Afterwards, the Khalji dynasty was also able to conquer most of central India, but both failed to conquer the whole of the Indian subcontinent. The sultanate reached the peak of its geographical reach during the Tughlaq dynasty, occupying most of the Indian subcontinent. This was followed by decline due to Hindu reconquests, states such as the Vijayanagara Empire asserting independence, and new Muslim sultanates such as the Bengal Sultanate breaking off.
During and in the Delhi Sultanate, there was a synthesis of Indian civilization with that of Islamic civilization, and the further integration of the Indian subcontinent with a growing world system and wider international networks spanning large parts of Afro - Eurasia, which had a significant impact on Indian culture and society, as well as the wider world. The time of their rule included the earliest forms of Indo - Islamic architecture, increased growth rates in India 's population and economy, and the emergence of the Hindi - Urdu language. The Delhi Sultanate was also responsible for repelling the Mongol Empire 's potentially devastating invasions of India in the 13th and 14th centuries. However, the Delhi Sultanate also caused large scale destruction and desecration of temples in the Indian subcontinent. In 1526, the Sultanate was conquered and succeeded by the Mughal Empire.
The context behind the rise of the Delhi Sultanate in India was part of a wider trend affecting much of the Asian continent, including the whole of southern and western Asia: the influx of nomadic Turkic peoples from the Central Asian steppes. This can be traced back to the 9th century, when the Islamic Caliphate began fragmenting in the Middle East, where Muslim rulers in rival states began enslaving non-Muslim nomadic Turks from the Central Asian steppes, and raising many of them to become loyal military slaves called Mamluks. Soon, Turks were migrating to Muslim lands and becoming Islamicized. Many of the Turkic Mamluk slaves eventually rose up to become rulers, and conquered large parts of the Muslim world, establishing Mamluk Sultanates from Egypt to Afghanistan, before turning their attention to the Indian subcontinent.
It is also part of a longer trend predating the spread of Islam. Like other settled, agrarian societies in history, those in the Indian subcontinent have been attacked by nomadic tribes throughout its long history. In evaluating the impact of Islam on the subcontinent, one must note that the north - western subcontinent was a frequent target of tribes raiding from Central Asia in the pre-Islamic era. In that sense, the Muslim intrusions and later Muslim invasions were not dissimilar to those of the earlier invasions during the 1st millennium.
By 962 AD, Hindu and Buddhist kingdoms in South Asia were under a wave of raids from Muslim armies from Central Asia. Among them was Mahmud of Ghazni, the son of a Turkic Mamluk military slave, who raided and plundered kingdoms in north India from east of the Indus river to west of Yamuna river seventeen times between 997 and 1030. Mahmud of Ghazni raided the treasuries but withdrew each time, only extending Islamic rule into western Punjab.
The wave of raids on north Indian and western Indian kingdoms by Muslim warlords continued after Mahmud of Ghazni. The raids did not establish or extend permanent boundaries of their Islamic kingdoms. The Ghurid sultan Mu'izz ad - Din Muhammad Ghori, commonly known as Muhammad of Ghor, began a systematic war of expansion into north India in 1173. He sought to carve out a principality for himself by expanding the Islamic world. Muhammad of Ghor sought a Sunni Islamic kingdom of his own extending east of the Indus river, and he thus laid the foundation for the Muslim kingdom called the Delhi Sultanate. Some historians chronicle the Delhi Sultanate from 1192 due to the presence and geographical claims of Muhammad Ghori in South Asia by that time.
Ghori was assassinated in 1206, by Ismāʿīlī Shia Muslims in some accounts or by Hindu Khokhars in others. After the assassination, one of Ghori 's slaves (or mamluks, Arabic: مملوك), the Turkic Qutb al - Din Aibak, assumed power, becoming the first Sultan of Delhi.
Qutb al - Din Aibak, a former slave of Mu'izz ad - Din Muhammad Ghori (known more commonly as Muhammad of Ghor), was the first ruler of the Delhi Sultanate. Aibak was of Cuman - Kipchak (Turkic) origin, and due to his lineage, his dynasty is known as the Mamluk (Slave) dynasty (not to be confused with the Mamluk dynasty of Iraq or the Mamluk dynasty of Egypt). Aibak reigned as the Sultan of Delhi for four years, from 1206 to 1210.
After Aibak died, Aram Shah assumed power in 1210, but he was assassinated in 1211 by Shams ud - Din Iltutmish. Iltutmish 's power was precarious, and a number of Muslim amirs (nobles) challenged his authority as they had been supporters of Qutb al - Din Aibak. After a series of conquests and brutal executions of opposition, Iltutmish consolidated his power. His rule was challenged a number of times, such as by Qubacha, and this led to a series of wars. Iltutmish conquered Multan and Bengal from contesting Muslim rulers, as well as Ranthambore and Siwalik from the Hindu rulers. He also attacked, defeated, and executed Taj al - Din Yildiz, who asserted his rights as heir to Mu'izz ad - Din Muhammad Ghori. Iltutmish 's rule lasted till 1236. Following his death, the Delhi Sultanate saw a succession of weak rulers, disputing Muslim nobility, assassinations, and short - lived tenures. Power shifted from Rukn ud - Din Firuz to Razia Sultana and others, until Ghiyas ud - Din Balban came to power and ruled from 1266 to 1287. He was succeeded by 17 - year - old Muiz ud - Din Qaiqabad, who appointed Jalal ud - Din Firuz Khalji as the commander of the army. Khalji assassinated Qaiqabad and assumed power, thus ending the Mamluk dynasty and starting the Khalji dynasty.
Qutb al - Din Aibak initiated the construction of the Qutub Minar and the Quwwat - ul - Islam (Might of Islam) Mosque, now a UNESCO world heritage site. It was built from the remains of twenty seven demolished Hindu and Jain temples. The Qutub Minar Complex or Qutb Complex was expanded by Iltutmish, and later by Ala ud - Din Khalji (the second ruler of the Khalji dynasty) in the early 14th century. During the Mamluk dynasty, many nobles from Afghanistan and Persia migrated and settled in India, as West Asia came under Mongol siege.
The Khalji dynasty was of Turko - Afghan heritage. Originally of Turkic origin, they had long been settled in present - day Afghanistan before proceeding to Delhi in India. The name "Khalji '' refers to an Afghan village or town known as Qalat - e Khalji (Fort of Ghilji), which was established by Khalaj people who settled in present - day Afghanistan. Because of their intermarriages with local Afghans and their adoption of Afghan habits and customs, they were sometimes treated by others as ethnic Afghans, and the dynasty is therefore sometimes referred to as Turko - Afghan. The dynasty later also had Indian ancestry, through Jhatyapali (daughter of Ramachandra of Devagiri), wife of Alauddin Khalji and mother of Shihabuddin Omar.
The first ruler of the Khalji dynasty was Jalal ud - Din Firuz Khalji. Firuz Khalji had already gathered enough support among the Afghans for taking over the crown. He came to power in 1290 after killing the last ruler of the Mamluk dynasty, Muiz ud - Din Qaiqabad, with the support of Afghan and Turkic nobles. He was around 70 years old at the time of his ascension, and was known as a mild - mannered, humble and kind monarch to the general public. Jalal ud - Din Firuz was of Turkic Khalaj origin, and ruled for 6 years before he was murdered in 1296 by his nephew and son - in - law Juna Muhammad Khalji, who later came to be known as Ala ud - Din Khalji.
Ala ud - Din began his military career as governor of Kara province, from where he led two raids on Malwa (1292) and Devagiri (1294) for plunder and loot. His military campaigning returned to these lands as well other south Indian kingdoms after he assumed power. He conquered Gujarat, Ranthambore, Chittor, and Malwa. However, these victories were cut short because of Mongol attacks and plunder raids from the north - west. The Mongols withdrew after plundering and stopped raiding north - west parts of the Delhi Sultanate.
After the Mongols withdrew, Ala ud - Din Khalji continued expanding the Delhi Sultanate into southern India with the help of generals such as Malik Kafur and Khusro Khan. They collected lots of war booty (anwatan) from those they defeated. His commanders collected war spoils and paid ghanima (Arabic: الْغَنيمَة, a tax on spoils of war), which helped strengthen the Khalji rule. Among the spoils was the Warangal loot that included the famous Koh - i - noor diamond.
Ala ud - Din Khalji changed tax policies, raising agriculture taxes from 20 % to 50 % (payable in grain and agricultural produce), eliminating payments and commissions on taxes collected by local chiefs, banned socialization among his officials as well as inter-marriage between noble families to help prevent any opposition forming against him, and he cut salaries of officials, poets, and scholars. These tax policies and spending controls strengthened his treasury to pay the keep of his growing army; he also introduced price controls on all agriculture produce and goods in the kingdom, as well as controls on where, how, and by whom these goods could be sold. Markets called "shahana - i - mandi '' were created. Muslim merchants were granted exclusive permits and monopoly in these "mandis '' to buy and resell at official prices. No one other than these merchants could buy from farmers or sell in cities. Those found violating these "mandi '' rules were severely punished, often by mutilation. Taxes collected in the form of grain were stored in the kingdom 's storage. During famines that followed, these granaries ensured sufficient food for the army.
Historians note Ala ud - Din Khalji as being a tyrant. Anyone Ala ud - Din suspected of being a threat to his power was killed along with the women and children of that family. In 1298, between 15,000 and 30,000 people near Delhi, who had recently converted to Islam, were slaughtered in a single day, due to fears of an uprising. He is also known for his cruelty against kingdoms he defeated in battle.
After Ala ud - Din 's death in 1316, his eunuch general Malik Kafur, who had been born into a Hindu family in India and had converted to Islam, tried to assume power. He lacked the support of the Persian and Turkic nobility and was subsequently killed. The last Khalji ruler was Ala ud - Din Khalji 's 18 - year - old son Qutb ud - Din Mubarak Shah Khalji, who ruled for four years before he was killed by Khusro Khan, another of Ala ud - Din 's generals. Khusro Khan 's reign lasted only a few months, before Ghazi Malik, later to be called Ghiyath al - Din Tughlaq, killed him and assumed power in 1320, thus ending the Khalji dynasty and starting the Tughlaq dynasty.
The Tughlaq dynasty lasted from 1320 to nearly the end of the 14th century. The first ruler, Ghazi Malik, rechristened himself as Ghiyath al - Din Tughlaq and is also referred to in scholarly works as Tughlak Shah. He was of Turko - Indian origins; his father was a Turkic slave and his mother was a Hindu. Ghiyath al - Din ruled for five years and built a town near Delhi named Tughlaqabad. According to some historians such as Vincent Smith, he was killed by his son Juna Khan, who then assumed power in 1325. Juna Khan rechristened himself as Muhammad bin Tughlaq and ruled for 26 years. During his rule, the Delhi Sultanate reached its peak in terms of geographical reach, covering most of the Indian subcontinent.
Muhammad bin Tughlaq was an intellectual, with extensive knowledge of the Quran, Fiqh, poetry and other fields. He was also deeply suspicious of his kinsmen and wazirs (ministers), extremely severe with his opponents, and took decisions that caused economic upheaval. For example, he ordered the minting of coins from base metals with the face value of silver coins, a decision that failed because ordinary people minted counterfeit coins from base metal they had in their houses and used them to pay taxes and jizya.
On another occasion, because he had become upset by some accounts, or in order to run the Sultanate from the centre of India by other accounts, Muhammad bin Tughlaq ordered the transfer of his capital from Delhi to Devagiri in modern - day Maharashtra (renaming it Daulatabad), forcing the mass migration of Delhi 's population. Those who refused were killed. One blind person who failed to move to Daulatabad was dragged for the entire journey of 40 days; the man died, his body fell apart, and only his tied leg reached Daulatabad. The capital move failed because Daulatabad was arid and did not have enough drinking water to support the new capital. The capital then returned to Delhi. Nevertheless, Muhammad bin Tughlaq 's orders affected history, as a large number of Delhi Muslims who had come to the Deccan area did not return to Delhi to live near Muhammad bin Tughlaq. This influx of the then - Delhi residents into the Deccan region led to a growth of the Muslim population in central and southern India. Muhammad bin Tughlaq 's adventures in the Deccan region were also marked by campaigns of destruction and desecration of Hindu and Jain temples, for example the Swayambhu Shiva Temple and the Thousand Pillar Temple.
Revolts against Muhammad bin Tughlaq began in 1327, continued over his reign, and over time the geographical reach of the Sultanate shrank. The Vijayanagara Empire originated in southern India as a direct response to attacks from the Delhi Sultanate, and liberated south India from the Delhi Sultanate 's rule. In 1337, Muhammad bin Tughlaq ordered an attack on China, sending part of his forces over the Himalayas. Few survived the journey, and they were executed upon their return for failing. During his reign, state revenues collapsed from his policies such as the base metal coins of 1329 - 1332. To cover state expenses, he sharply raised taxes. Those who failed to pay taxes were hunted and executed. Famines, widespread poverty, and rebellion grew across the kingdom. In 1338 his own nephew rebelled in Malwa; Muhammad bin Tughlaq attacked him, caught him, and flayed him alive. By 1339, the eastern regions under local Muslim governors and the southern parts led by Hindu kings had revolted and declared independence from the Delhi Sultanate. Muhammad bin Tughlaq did not have the resources or support to respond to the shrinking kingdom. The historian Walford chronicled that Delhi and most of India faced severe famines during Muhammad bin Tughlaq 's rule in the years after the base metal coin experiment. By 1347, the Bahmani Sultanate had become an independent and competing Muslim kingdom in the Deccan region of South Asia.
Muhammad bin Tughlaq died in 1351 while trying to chase and punish people in Gujarat who were rebelling against the Delhi Sultanate. He was succeeded by Firuz Shah Tughlaq (1351 -- 1388), who tried to regain the old kingdom boundaries by waging war with Bengal for 11 months in 1359. However, Bengal did not fall. Firuz Shah ruled for 37 years. His reign attempted to stabilize the food supply and reduce famines by commissioning an irrigation canal from the Yamuna river. An educated sultan, Firuz Shah left a memoir. In it he wrote that he banned the practice of torture, such as amputations, tearing out of eyes, sawing people alive, crushing people 's bones as punishment, pouring molten lead into throats, setting people on fire, and driving nails into hands and feet, among others. He also wrote that he did not tolerate attempts by the Rafawiz Shia Muslim and Mahdi sects to proselytize people into their faith, nor did he tolerate Hindus who tried to rebuild temples that his armies had destroyed. As punishment for proselytizing, Firuz Shah put many Shias, Mahdis, and Hindus to death (siyasat). Firuz Shah Tughlaq also listed among his accomplishments the conversion of Hindus to Sunni Islam by announcing an exemption from taxes and jizya for those who converted, and by lavishing new converts with presents and honours. Simultaneously, he raised taxes and jizya, assessing it at three levels, and stopped the practice of his predecessors, who had historically exempted all Hindu Brahmins from the jizya. He also vastly expanded the number of slaves in his service and in those of Muslim nobles. The reign of Firuz Shah Tughlaq was marked by a reduction in extreme forms of torture and the elimination of favours to select parts of society, but also by increased intolerance and persecution of targeted groups.
The death of Firuz Shah Tughlaq created anarchy and disintegration of the kingdom. The last rulers of this dynasty both called themselves Sultan from 1394 to 1397: Nasir ud - Din Mahmud Shah Tughlaq, the grandson of Firuz Shah Tughlaq, who ruled from Delhi, and Nasir ud - Din Nusrat Shah Tughlaq, another relative of Firuz Shah Tughlaq, who ruled from Firozabad, which was a few miles from Delhi. The battle between the two relatives continued until Timur 's invasion in 1398. Timur, also known as Tamerlane in Western scholarly literature, was the Turkic ruler of the Timurid Empire. He became aware of the weakness and quarrelling of the rulers of the Delhi Sultanate, so he marched with his army to Delhi, plundering and killing all the way. Estimates for the massacre by Timur in Delhi range from 100,000 to 200,000 people. Timur had no intention of staying in or ruling India. He looted the lands he crossed, then plundered and burnt Delhi. Over five days, Timur and his army waged a massacre. Then he collected the wealth, captured women and slaves (particularly skilled artisans), and returned to Samarkand. The people and lands within the Delhi Sultanate were left in a state of anarchy, chaos, and pestilence. Nasir ud - Din Mahmud Shah Tughlaq, who had fled to Gujarat during Timur 's invasion, returned and nominally ruled as the last ruler of the Tughlaq dynasty, as a puppet of various factions at the court.
The Sayyid dynasty was a Turkic dynasty that ruled the Delhi Sultanate from 1415 to 1451. The Timurid invasion and plunder had left the Delhi Sultanate in shambles, and little is known about the rule by the Sayyid dynasty. Annemarie Schimmel notes the first ruler of the dynasty as Khizr Khan, who assumed power by claiming to represent Timur. His authority was questioned even by those near Delhi. His successor was Mubarak Khan, who rechristened himself as Mubarak Shah and tried to regain lost territories in Punjab, unsuccessfully.
With the power of the Sayyid dynasty faltering, Islam 's history on the Indian subcontinent underwent a profound change, according to Schimmel. The previously dominant Sunni sect of Islam became diluted, alternate Muslim sects such as Shia rose, and new competing centres of Islamic culture took roots beyond Delhi.
The Sayyid dynasty was displaced by the Lodi dynasty in 1451.
The Lodi dynasty belonged to the Pashtun (Afghan) Lodi tribe. Bahlul Khan Lodi started the Lodi dynasty and was the first Pashtun to rule the Delhi Sultanate. Bahlul Lodi began his reign by attacking the Muslim Jaunpur Sultanate to expand the influence of the Delhi Sultanate, and was partially successful through a treaty. Thereafter, the region from Delhi to Varanasi (then at the border of Bengal province) was back under the influence of the Delhi Sultanate.
After Bahlul Lodi died, his son Nizam Khan assumed power, rechristened himself as Sikandar Lodi and ruled from 1489 to 1517. One of the better known rulers of the dynasty, Sikandar Lodi expelled his brother Barbak Shah from Jaunpur, installed his son Jalal Khan as the ruler, then proceeded east to make claims on Bihar. The Muslim governors of Bihar agreed to pay tribute and taxes, but operated independent of the Delhi Sultanate. Sikandar Lodi led a campaign of destruction of temples, particularly around Mathura. He also moved his capital and court from Delhi to Agra, an ancient Hindu city that had been destroyed during the plunder and attacks of the early Delhi Sultanate period. Sikandar thus erected buildings with Indo - Islamic architecture in Agra during his rule, and the growth of Agra continued during the Mughal Empire, after the end of Delhi Sultanate.
Sikandar Lodi died a natural death in 1517, and his second son Ibrahim Lodi assumed power. Ibrahim did not enjoy the support of Afghan and Persian nobles or regional chiefs. Ibrahim attacked and killed his elder brother Jalal Khan, who was installed as the governor of Jaunpur by his father and had the support of the amirs and chiefs. Ibrahim Lodi was unable to consolidate his power, and after Jalal Khan 's death, the governor of Punjab, Daulat Khan Lodi, reached out to the Mughal Babur and invited him to attack Delhi Sultanate. Babur defeated and killed Ibrahim Lodi in the Battle of Panipat in 1526. The death of Ibrahim Lodi ended the Delhi Sultanate, and the Mughal Empire replaced it.
Before and during the Delhi Sultanate, Islamic civilization was the most cosmopolitan civilization of the Middle Ages. It had a multicultural and pluralistic society, and wide - ranging international networks, including social and economic networks, spanning large parts of Afro - Eurasia, leading to escalating circulation of goods, peoples, technologies and ideas. While initially disruptive due to the passing of power from native Indian elites to Turkic Muslim elites, the Delhi Sultanate was responsible for integrating the Indian subcontinent into a growing world system, drawing India into a wider international network, which led to cultural and social enrichment in the Indian subcontinent.
Economist Angus Maddison has estimated that, during the medieval Delhi Sultanate era, between 1000 and 1500, India 's GDP grew nearly 80 %, to $60.5 billion in 1500.
The worm gear roller cotton gin was invented in the Indian subcontinent during the early Delhi Sultanate era of the 13th -- 14th centuries, and is still used in India through to the present day. Another innovation, the incorporation of the crank handle in the cotton gin, first appeared in the Indian subcontinent some time during the late Delhi Sultanate or the early Mughal Empire. The production of cotton, which may have largely been spun in the villages and then taken to towns in the form of yarn to be woven into cloth textiles, was advanced by the diffusion of the spinning wheel across India during the Delhi Sultanate era, lowering the costs of yarn and helping to increase demand for cotton. The diffusion of the spinning wheel, and the incorporation of the worm gear and crank handle into the roller cotton gin, led to greatly expanded Indian cotton textile production.
The Indian population had largely been stagnant at 75 million during the Middle Kingdoms era from 1 AD to 1000 AD. During the Medieval Delhi Sultanate era from 1000 to 1500, India experienced lasting population growth for the first time in a thousand years, with its population increasing nearly 50 % to 110 million by 1500 AD.
While the Indian subcontinent had seen invaders from Central Asia since ancient times, what made the Muslim invasions different is that, unlike the preceding invaders who assimilated into the prevalent social system, the successful Muslim conquerors retained their Islamic identity and created new legal and administrative systems that challenged and in many cases superseded the existing systems of social conduct and ethics, even influencing the non-Muslim rivals and common masses to a large extent, though the non-Muslim population was left to its own laws and customs. They also introduced new cultural codes that in some ways were very different from the existing cultural codes. This led to the rise of a new Indian culture which was mixed in nature, different from ancient Indian culture. The overwhelming majority of Muslims in India were Indian natives converted to Islam. This factor also played an important role in the synthesis of cultures.
The Hindustani language (Hindi - Urdu) began to emerge in the Delhi Sultanate period, developed from the Middle Indo - Aryan apabhramsha vernaculars of North India. Amir Khusro, who lived in the 13th century CE during the Delhi Sultanate period in North India, used a form of Hindustani, which was the lingua franca of the period, in his writings and referred to it as Hindavi.
The bulk of the Delhi Sultanate 's army consisted of nomadic Turkic Mamluk military slaves, who were skilled in nomadic cavalry warfare. A major military contribution of the Delhi Sultanate was its successful campaigns in repelling the Mongol Empire 's invasions of India, which could have been devastating for the Indian subcontinent, like the Mongol invasions of China, Persia and Europe. The Delhi Sultanate 's Mamluk army was skilled in the same style of nomadic cavalry warfare used by the Mongols, making it successful in repelling the Mongol invasions, as was the case for the Mamluk Sultanate of Egypt. Were it not for the Delhi Sultanate, it is possible that the Mongol Empire would have been successful in invading India. The strength of the armies changed over time. According to Firishta, during the Battle of Kili, Alauddin led an army of 300,000 cavalry and 2,700 elephants. During the Tughlaq period, Muhammad bin Tughlaq raised an army of 3 million. The soldiers used weapons such as swords, spears, and shields. Armour such as steel helmets and chainmail was commonly used. Armored war elephants were used effectively against enemies such as the Mongols.
Historian Richard Eaton has tabulated a campaign of destruction of idols and temples by Delhi Sultans, intermixed with instances of years where the temples were protected from desecration. In his paper, he has listed 37 instances of Hindu temples being desecrated or destroyed in India during the Delhi Sultanate, from 1234 to 1518, for which reasonable evidence is available. He notes that this was not unusual in medieval India, as there were numerous recorded instances of temple desecration by Hindu and Buddhist kings against rival Indian kingdoms between 642 and 1520, involving conflict between devotees of different Hindu deities, as well as between Hindus, Buddhists and Jains. He also notes that there were many instances of Delhi sultans, who often had Hindu ministers, ordering the protection, maintenance and repair of temples, according to both Muslim and Hindu sources. For example, a Sanskrit inscription notes that Sultan Muhammad bin Tughluq repaired a Siva temple in Bidar after his Deccan conquest. There was often a pattern of Delhi sultans plundering or damaging temples during conquest, and then patronizing or repairing temples after conquest. This pattern came to an end with the Mughal Empire, where Akbar the Great 's chief minister Abu'l - Fazl criticized the excesses of earlier sultans such as Mahmud of Ghazni.
In many cases, the demolished remains, rocks and broken statue pieces of temples destroyed by Delhi sultans were reused to build mosques and other buildings. For example, the Qutb complex in Delhi was built from stones of 27 demolished Hindu and Jain temples by some accounts. Similarly, the Muslim mosque in Khanapur, Maharashtra was built from the looted parts and demolished remains of Hindu temples. Muhammad bin Bakhtiyar Khalji destroyed Buddhist and Hindu libraries and their manuscripts at Nalanda and Odantapuri Universities in 1193 AD at the beginning of the Delhi Sultanate.
The first historical record of a campaign of destruction of temples and defacement of faces or heads of Hindu idols lasted from 1193 through the early 13th century in Rajasthan, Punjab, Haryana and Uttar Pradesh under the command of Ghuri. Under the Khaljis, the campaign of temple desecration expanded to Bihar, Madhya Pradesh, Gujarat and Maharashtra, and continued through the late 13th century. The campaign extended to Telangana, Andhra Pradesh, Karnataka and Tamil Nadu under Malik Kafur and Ulugh Khan in the 14th century, and by the Bahmanis in 15th century. Orissa temples were destroyed in the 14th century under the Tughlaqs.
Beyond destruction and desecration, the sultans of the Delhi Sultanate in some cases had forbidden reconstruction of damaged Hindu, Jain and Buddhist temples, and they prohibited repairs of old temples or construction of any new temples. In certain cases, the Sultanate would grant a permit for repairs and construction of temples if the patron or religious community paid jizya (fee, tax). For example, a proposal by the Chinese to repair Himalayan Buddhist temples destroyed by the Sultanate army was refused, on the grounds that such temple repairs were only allowed if the Chinese agreed to pay jizya tax to the treasury of the Sultanate. In his memoirs, Firoz Shah Tughlaq describes how he destroyed temples and built mosques instead and killed those who dared build new temples. Other historical records from wazirs, amirs and the court historians of various Sultans of the Delhi Sultanate describe the grandeur of idols and temples they witnessed in their campaigns and how these were destroyed and desecrated.
Pullman Strike
The Pullman Strike was a nationwide railroad strike in the United States that lasted from May 11 to July 20, 1894, and was a turning point for US labor law. It pitted the American Railway Union (ARU) against the Pullman Company, the main railroads, and the federal government of the United States under President Grover Cleveland. The strike and boycott shut down much of the nation 's freight and passenger traffic west of Detroit, Michigan. The conflict began in Pullman, Chicago, on May 11 when nearly 4,000 factory employees of the Pullman Company began a wildcat strike in response to recent reductions in wages.
Most factory workers who built Pullman cars lived in the "company town '' of Pullman on the South Side of Chicago, Illinois. The industrialist George Pullman had designed it ostensibly as a model community. Pullman had a diverse work force. He wanted to hire African - Americans for certain jobs at the company. Pullman used ads and other campaigns to help bring workers into his company.
When his company laid off workers and lowered wages, it did not reduce rents, and the workers called for a strike. Among the reasons for the strike were the absence of democracy within the town of Pullman and its politics, the rigid paternalistic control of the workers by the company, excessive water and gas rates, and a refusal by the company to allow workers to buy and own houses. The workers had not yet formed a union. Founded in 1893 by Eugene V. Debs, the ARU was an organization of unskilled railroad workers. Debs brought ARU organizers to Pullman and signed up many of the disgruntled factory workers. When the Pullman Company refused recognition of the ARU or any negotiations, the ARU called a strike against the factory, but it showed no sign of success. To win the strike, Debs decided to stop the movement of Pullman cars on railroads. The over-the - rail Pullman employees (such as conductors and porters) did not go on strike.
Debs and the ARU called a massive boycott against all trains that carried a Pullman car. It affected most rail lines west of Detroit and at its peak involved some 250,000 workers in 27 states. The railroad brotherhoods and the American Federation of Labor (AFL) opposed the boycott, and the General Managers Association of the railroads coordinated the opposition. Riots and sabotage killed thirty people and caused $80 million in damages. The federal government obtained an injunction against the union, Debs, and other boycott leaders, ordering them to stop interfering with trains that carried mail cars. After the strikers refused, President Grover Cleveland ordered in the Army to stop the strikers from obstructing the trains. Violence broke out in many cities, and the strike collapsed. Defended by a team including Clarence Darrow, Debs was convicted of violating a court order and sentenced to prison; the ARU then dissolved.
During a severe recession (the Panic of 1893), the Pullman Palace Car Company cut wages as demand for new passenger cars plummeted and the company 's revenue dropped. A delegation of workers complained that wages had been cut but not rents at their company housing or other costs in the company town. The company owner, George Pullman, refused to lower rents or go to arbitration.
Many of the Pullman factory workers joined the American Railway Union (ARU), led by Eugene V. Debs, which supported their strike by launching a boycott in which ARU members refused to run trains containing Pullman cars. At the time of the strike approximately 35 % of Pullman workers were members of the ARU. The plan was to force the railroads to bring Pullman to compromise. Debs began the boycott on June 26, 1894. Within four days, 125,000 workers on twenty - nine railroads had "walked off '' the job rather than handle Pullman cars. The railroads coordinated their response through the General Managers ' Association, which had been formed in 1886 and included 24 lines linked to Chicago. The railroads began hiring replacement workers (strikebreakers), which increased hostilities. Many blacks were recruited as strikebreakers and crossed picket lines, as they feared that the racism expressed by the American Railway Union would lock them out of another labor market. This added racial tension to the union 's predicament.
On June 29, 1894, Debs hosted a peaceful meeting to rally support for the strike from railroad workers at Blue Island, Illinois. Afterward, groups within the crowd became enraged and set fire to nearby buildings and derailed a locomotive. Elsewhere in the western states, sympathy strikers prevented transportation of goods by walking off the job, obstructing railroad tracks, or threatening and attacking strikebreakers. This increased national attention and the demand for federal action.
Under direction from President Grover Cleveland, the US Attorney General Richard Olney dealt with the strike. Olney had been a railroad attorney, and still received a $10,000 retainer from the Chicago, Burlington and Quincy Railroad, in comparison to his $8,000 salary as Attorney General. Olney obtained an injunction in federal court barring union leaders from supporting the strike and demanding that the strikers cease their activities or face being fired. Debs and other leaders of the ARU ignored the injunction, and federal troops were called up to enforce it. While Debs had been reluctant to start the strike, he threw his energies into organizing it. He called a general strike of all union members in Chicago, but this was opposed by Samuel Gompers, head of the AFL, and other established unions, and it failed.
City by city the federal forces broke the ARU efforts to shut down the national transportation system. Thousands of United States Marshals and some 12,000 United States Army troops, commanded by Brigadier General Nelson Miles, took action. President Cleveland wanted the trains moving again, based on his legal, constitutional responsibility for the mails. His lawyers argued that the boycott violated the Sherman Antitrust Act, and represented a threat to public safety. The arrival of the military and the subsequent deaths of workers in violence led to further outbreaks of violence. During the course of the strike, 30 strikers were killed and 57 were wounded. Property damage exceeded $80 million.
The strike affected hundreds of towns and cities across the country. Railroad workers were divided, for the old established Brotherhoods, which included the skilled workers such as engineers, firemen and conductors, did not support the labor action. ARU members did support the action, and often comprised unskilled ground crews. In many areas townspeople and businessmen generally supported the railroads while farmers -- many affiliated with the Populists -- supported the ARU.
In Billings, Montana, an important rail center, a local Methodist minister, J.W. Jennings, supported the ARU. In a sermon he compared the Pullman boycott to the Boston Tea Party, and attacked Montana state officials and President Cleveland for abandoning "the faith of the Jacksonian fathers. '' Rather than defending "the rights of the people against aggression and oppressive corporations, '' he said party leaders were "the pliant tools of the codfish monied aristocracy who seek to dominate this country. '' Billings remained quiet but on July 10, soldiers reached Lockwood, Montana, a small rail center, where the troop train was surrounded by hundreds of angry strikers. Narrowly averting violence, the army opened the lines through Montana. When the strike ended, the railroads fired and blacklisted all the employees who had supported it.
In California the boycott was effective in Sacramento, a labor stronghold, but weak in the Bay Area and minimal in Los Angeles. The strike lingered as strikers expressed longstanding grievances over wage reductions, and it indicated how unpopular the Southern Pacific Railroad was. Strikers engaged in violence and sabotage; the companies saw it as civil war, while the ARU proclaimed it a crusade for the rights of unskilled workers.
Public opinion was mostly opposed to the strike and supported Cleveland 's actions. Republicans and eastern Democrats supported Cleveland (the leader of the northeastern pro-business wing of the party), but southern and western Democrats as well as Populists generally denounced him. Chicago Mayor John Hopkins supported the strikers and stopped the Chicago Police from interfering before the strike turned violent. Governor John Peter Altgeld of Illinois, a Democrat, denounced Cleveland and said he could handle all disturbances in his state without federal intervention.
Media coverage was extensive and generally negative. A common trope in news reports and editorials depicted the boycotters as foreigners who contested the patriotism expressed by the militias and troops involved, as numerous recent immigrants worked in the factories and on the railroads. The editors warned of mobs, aliens, anarchy, and defiance of the law. The New York Times called it "a struggle between the greatest and most important labor organization and the entire railroad capital. '' In Chicago the established church leaders denounced the boycott, but some younger Protestant ministers defended it.
Debs was arrested on federal charges, including conspiracy to obstruct the mail as well as disobeying an order directed to him by the Supreme Court to stop the obstruction of railways and to dissolve the boycott. He was defended by Clarence Darrow, a prominent attorney, as well as Lyman Trumbull. At the conspiracy trial Darrow argued that it was the railways, not Debs and his union, that met in secret and conspired against their opponents. Sensing that Debs would be acquitted, the prosecution dropped the charge when a juror took ill. Although Darrow also represented Debs at the United States Supreme Court for violating the federal injunction, Debs was sentenced to six months in prison.
Early in 1895 General William M. Graham erected a memorial obelisk in the San Francisco National Cemetery at the Presidio, in honor of four soldiers of the 5th Artillery killed in a Sacramento train crash of July 11, 1894, during the strike. The train wrecked crossing a trestle bridge purportedly dynamited by union members. Graham 's monument included the inscription, "Murdered by Strikers '', a description he hotly defended. The obelisk remains in place.
In the aftermath of the Pullman Strike, the state ordered the company to sell off its residential holdings. In the decades after Pullman died (1897), Pullman became just another South Side neighborhood. The Pullman factory remained the area 's largest employer before closing in the 1950s. The area is both a National Historic Landmark and a Chicago Landmark District. Because of the significance of the strike, many state agencies and non-profit groups have hoped to revive the Pullman neighborhoods, starting with Pullman Park, one of the largest projects. It was to be a $350 million mixed - use development on the site of an old steel plant. The plan called for 670,000 square feet of new retail space, a 125,000 - square - foot neighborhood recreation center, and 1,100 housing units.
Following his release from jail in 1895, ARU President Debs became a committed advocate of socialism, helping in 1897 to launch the Social Democracy of America, a forerunner of the Socialist Party of America. He ran for president in 1900 for the first of five times as head of the Socialist Party ticket.
Civil as well as criminal charges were brought against the organizers of the strike and Debs in particular, and the Supreme Court issued a unanimous decision, In re Debs, that rejected Debs ' actions. The Illinois Governor John P. Altgeld was incensed at Cleveland for putting the federal government at the service of the employers, and for rejecting Altgeld 's plan to use his state militia rather than federal troops to keep order.
Cleveland 's administration appointed a national commission to study the causes of the 1894 strike; it found George Pullman 's paternalism partly to blame and described the operations of his company town to be "un-American ''. In 1898, the Illinois Supreme Court forced the Pullman Company to divest ownership in the town, as its company charter did not authorize such operations, and the land was annexed to Chicago. Much of it is now designated as an historic district, which is listed on the National Register of Historic Places.
In 1894, in an effort to conciliate organized labor after the strike, President Grover Cleveland and Congress designated Labor Day as a federal holiday. Legislation for the holiday was pushed through Congress six days after the strike ended. Samuel Gompers, who had sided with the federal government in its effort to end the strike by the American Railway Union, spoke out in favor of the holiday.
|
who played the chinese kid in indiana jones | Jonathan Ke Quan - wikipedia
Jonathan Luke Ke Huy Quan (Vietnamese: Quan Kế Huy; Chinese: 關 繼 威; Cantonese: Gwāan Gaiwāi, Mandarin: Guān Jìwēi; born August 20, 1971) is a Vietnamese - born American actor and stunt choreographer of Chinese descent. He is best known for his appearances in the 1980s Steven Spielberg productions of Indiana Jones and the Temple of Doom and The Goonies.
Quan was born in Saigon, South Vietnam (present - day Ho Chi Minh City, Vietnam). He was forced to leave the country when the Army of the Republic of Vietnam was defeated during the Fall of Saigon. His family was selected for political asylum and emigrated to the United States. He became a child actor at age 12, starring as Harrison Ford 's sidekick Short Round in Indiana Jones and the Temple of Doom. After he was cast, his family changed his name to Ke Huy, the name by which he is credited in the film.
In 1985, he co-starred in The Goonies as a key member of the eponymous group of kids, the kid inventor Richard "Data '' Wang. He played a pickpocket orphan in the 1986 Taiwanese movie It Takes a Thief. In 1987, he appeared in the Japanese movie "Passengers '' (Passenjā Sugisarishi Hibi) with the Japanese idol singer Honda Minako. He played Sam on the short - lived TV series Together We Stand (1986 -- 1987) and played Jasper Kwong in the sitcom Head of the Class from 1989 to 1991. He also starred in the movie Breathing Fire (1991) and had a small role in Encino Man (1992). He played the starring role in the 1993 Mandarin language TV show "The Big Eunuch and the Little Carpenter '' which ran for forty episodes. He also starred in the 1996 Hong Kong / Vietnam collaboration movie Red Pirate. He last appeared onscreen in the 2002 Hong Kong movie Second Time Around alongside Ekin Cheng and Cecilia Cheung.
He attended Mount Gleason Jr. High in Tujunga, California, and Alhambra High School in Alhambra, California. After high school, he graduated from the University of Southern California School of Cinematic Arts. He later attended the University of Manchester, in the United Kingdom. He is fluent in Vietnamese, Cantonese, Mandarin, and English.
Having studied Taekwondo under Philip Tan on the set of Indiana Jones and the Temple of Doom, he later trained under Tao - liang Tan. He worked as a stunt choreographer for X-Men and The One as the assistant of renowned Hong Kong fight choreographer Corey Yuen.
|
actress in the movie star wars episode vi - return of the jedi | Return of the Jedi - Wikipedia
Return of the Jedi (also known as Star Wars: Episode VI -- Return of the Jedi) is a 1983 American epic space opera film directed by Richard Marquand. The screenplay by Lawrence Kasdan and George Lucas was from a story by Lucas, who was also the executive producer. It is the third installment in the original Star Wars trilogy and the first film to use THX technology. The film is set one year after The Empire Strikes Back and was produced by Howard Kazanjian for Lucasfilm Ltd. The film stars Mark Hamill, Harrison Ford, Carrie Fisher, Billy Dee Williams, Anthony Daniels, David Prowse, Kenny Baker, Peter Mayhew and Frank Oz.
The Galactic Empire, under the direction of the ruthless Emperor, is constructing a second Death Star in order to crush the Rebel Alliance once and for all. Since the Emperor plans to personally oversee the final stages of its construction, the Rebel Fleet launches a full - scale attack on the Death Star in order to prevent its completion and kill the Emperor, effectively bringing an end to the Empire 's hold over the galaxy. Meanwhile, Luke Skywalker, a Jedi Knight, struggles to bring his father Darth Vader back to the Light Side of the Force.
David Lynch and David Cronenberg were considered to direct the project before Marquand signed on as director. The production team relied on Lucas ' storyboards during pre-production. While writing the shooting script, Lucas, Kasdan, Marquand, and producer Howard Kazanjian spent two weeks in conference discussing ideas to construct it. Kazanjian 's schedule pushed shooting to begin a few weeks early to allow Industrial Light & Magic more time to work on the film 's effects in post-production. Filming took place in England, California, and Arizona from January to May 1982. Strict secrecy surrounded the production and the film used the working title Blue Harvest to prevent price gouging.
The film was released in theaters on May 25, 1983, six years to the day after the release of the first film, receiving mostly positive reviews. The film grossed between $475 million and $572 million worldwide. Several home video and theatrical releases and revisions to the film followed over the next 20 years. Star Wars continued with The Phantom Menace as part of the film series ' prequel trilogy.
A sequel, The Force Awakens, was released on December 18, 2015, as part of the new sequel trilogy.
Luke Skywalker initiates a plan to rescue Han Solo from the crime lord Jabba the Hutt with the help of Princess Leia, Lando Calrissian, Chewbacca, C - 3PO, and R2 - D2. Leia infiltrates Jabba 's palace on Tatooine, disguised as a bounty hunter, with Chewbacca as her prisoner. Leia releases Han from his carbonite prison, but she is captured and enslaved. Luke arrives soon afterward, but is also captured after a tense standoff. After Luke survives his battle with Jabba 's Rancor, Jabba sentences him and Han to death by feeding them to the Sarlacc. Luke frees himself and battles Jabba 's guards. During the chaos, Leia strangles Jabba to death, and Luke destroys Jabba 's sail barge as the group escapes. While the others rendezvous with the Rebel Alliance, Luke returns to Dagobah where he finds that Yoda is dying. Before he dies, Yoda confirms that Darth Vader, once known as Anakin Skywalker, is Luke 's father, and that there is "another Skywalker ''. The spirit of Obi - Wan Kenobi confirms that this other Skywalker is Leia, who is Luke 's twin sister. Obi - Wan tells Luke that he must fight Vader again to defeat the Empire.
The Rebel Alliance learns that the Empire has been constructing a new Death Star under the supervision of the Emperor himself. As the station is protected by an energy shield, Han leads a strike team to destroy the shield generator on the forest moon of Endor; doing so would allow a squadron of starfighters to destroy the Death Star. The strike team, accompanied by Luke and Leia, travels to Endor in a stolen Imperial shuttle. On Endor, Luke and his companions encounter a tribe of Ewoks and, after an initial conflict, gain their trust. Later, Luke tells Leia that she is his sister, Vader is their father, and that he must go and confront him. Surrendering to Imperial troops, Luke is brought to Vader and unsuccessfully tries to convince him to turn from the dark side of the Force.
Vader takes Luke to the Death Star to meet the Emperor, intent on turning him to the dark side. The Emperor reveals that the Death Star is actually fully operational and the rebel fleet will fall into a trap. On Endor, Han 's strike team is captured by Imperial forces, but a surprise counterattack by the Ewoks allows the rebels to battle the Imperials. Meanwhile, Lando leads the rebel fleet to the Death Star in the Millennium Falcon, only to find out that the shield is still active, and the Imperial fleet is waiting for them. The Emperor tempts Luke to give in to his anger, and Luke engages Vader in a lightsaber duel. Vader senses that Luke has a sister, and threatens to turn her to the dark side. Enraged, Luke attacks Vader and severs his prosthetic right hand. The Emperor entreats Luke to kill Vader and take his place, but Luke refuses, declaring himself a Jedi as his father had been. Furious, the Emperor tortures Luke with Force lightning. Unwilling to let his son die, Vader throws the Emperor down a reactor chute to his death, but Vader is mortally electrocuted in the process. At his last request, Luke removes the redeemed Anakin 's mask before he dies peacefully in Luke 's arms.
As the battle between the Imperial and Alliance fleets continues, the strike team defeats the Imperial forces and destroys the shield generator, allowing the rebel fleet to launch their assault on the Death Star. Lando leads a group of rebel ships into the Death Star 's core and destroys the main reactor. As Luke escapes on a shuttle with his father 's body, the Falcon flies out of the Death Star 's superstructure as the station explodes. On Endor, Leia reveals to Han that Luke is her brother, and they kiss. Luke returns to Endor and cremates Anakin 's body on a pyre. As the rebels celebrate their victory over the Empire, Luke smiles as he sees the spirits of Obi - Wan, Yoda, and Anakin watching over them.
Denis Lawson reprises his role as Wedge Antilles from Star Wars, and Kenneth Colley and Jeremy Bulloch reprise their roles as Admiral Piett and Boba Fett from The Empire Strikes Back. Michael Pennington portrays Moff Jerjerrod, the commander of the second Death Star. Warwick Davis appears as Wicket W. Warrick, an Ewok who leads Leia and eventually her friends to the Ewok tribe. Baker was originally cast as Wicket, but was replaced by Davis after falling ill with food poisoning on the morning of the shoot. Davis had no previous acting experience and was cast only after his grandmother had discovered an open call for dwarfs for the new Star Wars film. Caroline Blakiston portrays Mon Mothma, a co-founder and leader of the Rebel Alliance. Michael Carter played Jabba 's aide, Bib Fortuna (voiced by Erik Bauersfeld), while Femi Taylor and Claire Davenport appeared as Jabba 's original slave dancers.
To portray the numerous alien species featured in the film a multitude of puppeteers, voice actors, and stunt performers were employed. Admiral Ackbar was performed by puppeteer Timothy M. Rose, with his voice provided by Erik Bauersfeld. Nien Nunb was portrayed by Richard Bonehill in costume for full body shots, while he was otherwise a puppet operated by Mike Quinn and his voice was provided by Kipsang Rotich. Rose also operated Salacious Crumb, whose voice was provided by Mark Dodson. Quinn also played Ree - Yees and Wol Cabbashite. Sy Snootles was a marionette operated by Rose and Quinn, while her voice was provided by Annie Arbogast. Others included Simon J. Williamson as Max Rebo, a Gamorrean Guard and a Mon Calamari; Deep Roy as Droopy McCool; Ailsa Berk as Amanaman; Paul Springer as Ree - Yees, Gamorrean Guard and a Mon Calamari; Hugh Spight as a Gamorrean Guard, Elom and a Mon Calamari; Swee Lim as Attark the Hoover; Richard Robinson as a Yuzzum; Gerald Home as Tessek and the Mon Calamari officer; Phil Herbert as Hermi Odle; Tik and Tok (Tim Dry and Sean Crawford) as Whiphid and Yak - Face; Phil Tippett as the Rancor with Michael McCormick.
Jabba the Hutt was operated by Toby Philpott, David Barclay and Mike Edmonds (who also portrays the Ewok Logray) operated the tail. Larry Ward portrays the Huttese language voice with Mike Quinn, among other roles, controlling the eyes.
As with the previous film, Lucas personally financed Return of the Jedi. Lucas also chose not to direct Return of the Jedi himself, and started searching for a director for the film. Lucas approached David Lynch, who had been nominated for the Academy Award for Best Director for The Elephant Man in 1980, to helm Return of the Jedi, but Lynch declined in order to direct Dune. David Cronenberg was also offered the chance to direct, but he declined the offer to make Videodrome and The Dead Zone. Lamont Johnson, director of Spacehunter: Adventures in the Forbidden Zone, was also considered. Lucas eventually chose Richard Marquand. Lucas may have directed some of the second unit work personally as the shooting threatened to go over schedule; this is a function Lucas had willingly performed on previous occasions when he had only officially been producing a film (e.g. More American Graffiti, Raiders of the Lost Ark). Lucas did operate the B camera on the set a few times. Lucas himself has admitted to being on the set frequently because of Marquand 's relative inexperience with special effects. Lucas praised Marquand as a "very nice person who worked well with actors ''. Marquand did note that Lucas kept a conspicuous presence on set, joking, "It is rather like trying to direct King Lear -- with Shakespeare in the next room! ''
The screenplay was written by Lawrence Kasdan and Lucas (with uncredited contributions by David Peoples and Marquand), based on Lucas ' story. Kasdan claims he told Lucas that Return of the Jedi was "a weak title '', and Lucas later decided to name the film Revenge of the Jedi. The screenplay itself was not finished until rather late in pre-production, well after a production schedule and budget had been created by Kazanjian and Marquand had been hired, which was unusual for a film. Instead, the production team relied on Lucas ' story and rough draft in order to commence work with the art department. When it came time to formally write a shooting script, Lucas, Kasdan, Marquand and Kazanjian spent two weeks in conference discussing ideas; Kasdan used tape transcripts of these meetings to then construct the script.
The issue of whether Harrison Ford would return for the final film arose during pre-production. Unlike the other stars of the first film, Ford had not contracted to do two sequels, and Raiders of the Lost Ark had made him an even bigger star. Return of the Jedi producer Howard Kazanjian (who also produced Raiders of the Lost Ark) convinced Ford to return:
"I played a very important part in bringing Harrison back for Return of the Jedi. Harrison, unlike Carrie Fisher and Mark Hamill signed only a two picture contract. That is why he was frozen in carbonite in The Empire Strikes Back. When I suggested to George we should bring him back, I distinctly remember him saying that Harrison would never return. I said what if I convinced him to return. George simply replied that we would then write him in to Jedi. I had just recently negotiated his deal for Raiders of the Lost Ark with Phil Gersh of the Gersh Agency. I called Phil who said he would speak with Harrison. When I called back again, Phil was on vacation. David, his son, took the call and we negotiated Harrison 's deal. When Phil returned to the office several weeks later he called me back and said I had taken advantage of his son in the negotiations. I had not. But agents are agents. ''
Ford suggested that Han Solo be killed through self - sacrifice. Kasdan concurred, saying it should happen near the beginning of the film to instill doubt as to whether the others would survive, but Lucas was vehemently against it and rejected the concept. Gary Kurtz, who produced Star Wars and The Empire Strikes Back but was replaced as producer for Return of the Jedi by Kazanjian, said in 2010 that the ongoing success with Star Wars merchandise and toys led George Lucas to reject the idea of killing off Han Solo in the middle part of the film during a raid on an Imperial base. Luke Skywalker was also to have walked off alone and exhausted like the hero in a Spaghetti Western but, according to Kurtz, Lucas opted for a happier ending to encourage higher merchandise sales.
Yoda was originally not meant to appear in the film, but Marquand strongly felt that returning to Dagobah was essential to resolve the dilemma raised by the previous film. The inclusion led Lucas to insert a scene in which Yoda confirms that Darth Vader is Luke 's father because, after a discussion with a children 's psychologist, he did not want younger moviegoers to dismiss Vader 's claim as a lie. Many ideas from the original script were left out or changed. For instance, the Ewoks were going to be Wookiees, the Millennium Falcon would be used in the arrival at the forest moon of Endor, and Obi - Wan Kenobi would return to life from his spectral existence in the Force.
Filming began on January 11, 1982, and lasted through May 20, 1982, a schedule six weeks shorter than The Empire Strikes Back. Kazanjian 's schedule pushed shooting as early as possible in order to give Industrial Light & Magic (ILM) as much time as possible to work on effects, and left some crew members dubious of their ability to be fully prepared for the shoot. Working on a budget of $32.5 million, Lucas was determined to avoid going over budget as had happened with The Empire Strikes Back. Producer Howard Kazanjian estimated that using ILM (owned wholly by Lucasfilm) for special effects saved the production approximately $18 million. However, the fact that Lucasfilm was a non-union company made acquiring shooting locations more difficult and more expensive, even though Star Wars and The Empire Strikes Back had been big hits. The project was given the working title Blue Harvest with a tagline of "Horror Beyond Imagination. '' This disguised what the production crew was really filming from fans and the press, and also prevented price gouging by service providers.
The first stage of production started with 78 days at Elstree Studios in England, where the film occupied all nine stages. The shoot commenced with a scene later deleted from the finished film where the heroes get caught in a sandstorm as they leave Tatooine. (This was the only major sequence cut from the film during editing.) While attempting to film Luke Skywalker 's battle with the rancor beast, Lucas insisted on trying to create the scene in the same style as Toho 's Godzilla films by using a stunt performer inside a suit. The production team made several attempts, but were unable to create an adequate result. Lucas eventually relented and decided to film the rancor as a high - speed puppet. In April, the crew moved to the Yuma Desert in Arizona for two weeks of Tatooine exteriors. Production then moved to the redwood forests of northern California near Crescent City where two weeks were spent shooting the Endor forest exteriors, and then concluded at ILM in San Rafael, California for about ten days of bluescreen shots. One of two "skeletal '' post-production units shooting background matte plates spent a day in Death Valley. The other was a special Steadicam unit shooting forest backgrounds from June 15 -- 17, 1982, for the speeder chase near the middle of the film. Steadicam inventor Garrett Brown personally operated these shots as he walked through a disguised path inside the forest shooting at less than one frame per second. By walking at about 5 mph (8 km / h) and projecting the footage at 24 frame / s, the motion seen in the film appeared as if it were moving at around 120 mph (190 km / h).
John Williams composed and conducted the film 's musical score with performances by the London Symphony Orchestra. Orchestration credits also include Thomas Newman. The initial release of the film 's soundtrack was on the RSO Records label in the United States. Sony Classical Records acquired the rights to the classic trilogy scores in 2004 after gaining the rights to release the second trilogy soundtracks (The Phantom Menace and Attack of the Clones). In the same year, Sony Classical re-pressed the 1997 RCA Victor release of Return of the Jedi along with the other two films in the trilogy. The set was released with the new artwork mirroring the first DVD release of the film. Despite the Sony digital re-mastering, which minimally improved the sound heard only on high - end stereos, this 2004 release is essentially the same as the 1997 RCA Victor release.
Meanwhile, special effects work at ILM quickly stretched the company to its operational limits. While the R&D work and experience gained from the previous two films in the trilogy allowed for increased efficiency, this was offset by the desire to have the closing film raise the bar set by each of these films. A compounding factor was the intention of several departments of ILM to either take on other film work or decrease staff during slow cycles. Instead, as soon as production began, the entire company found it necessary to remain running 20 hours a day on six - day weeks in order to meet their goals by April 1, 1983. Of about 900 special effects shots, all VistaVision optical effects remained in - house, since ILM was the only company capable of using the format, while about 400 4 - perf opticals were subcontracted to outside effects houses. Progress on the opticals was severely delayed for a time when ILM rejected about 100,000 feet (30,000 m) of film when the film perforations failed image registration and steadiness tests.
Return of the Jedi 's theatrical release took place on May 25, 1983. It was originally slated to be May 27, but was subsequently changed to coincide with the date of the 1977 release of the original Star Wars film. With a massive worldwide marketing campaign, illustrator Tim Reamer created the image for the movie poster and other advertising. At the time of its release, the film was advertised on posters and merchandise as simply Star Wars: Return of the Jedi, despite its on - screen "Episode VI '' distinction. The original film was later re-released to theaters in 1985.
In 1997, for the 20th anniversary of the release of Star Wars (retitled Episode IV: A New Hope), Lucas released The Star Wars Trilogy: Special Edition. Along with the two other films in the original trilogy, Return of the Jedi was re-released on March 14, 1997, with a number of changes and additions, which included the insertion of several alien band members in Jabba 's throne room, the modification of the Sarlacc to include a beak, the replacement of music at the closing scene, and a montage of different alien worlds celebrating the fall of the Empire. According to Lucas, Return of the Jedi required fewer changes than the previous two films because it is more emotionally driven than the others.
The original teaser trailer for the film carried the name Revenge of the Jedi. In December 1982, Lucas decided that "Revenge '' was not appropriate as Jedi should not seek revenge and returned to his original title. By that time thousands of "Revenge '' teaser posters (with artwork by Drew Struzan) had been printed and distributed. Lucasfilm stopped the shipping of the posters and sold the remaining stock of 6,800 posters to Star Wars fan club members for $9.50.
Star Wars: Episode III -- Revenge of the Sith, released in 2005 as part of the prequel trilogy, later alluded to the dismissed title Revenge of the Jedi.
The original theatrical version of Return of the Jedi was released on VHS and Laserdisc several times between 1986 and 1995, followed by releases of the Special Edition in the same formats between 1997 and 2000. Some of these releases contained featurettes; some were individual releases of just this film, while others were boxed sets of all three original films.
On September 21, 2004, the Special Editions of all three original films were released in a boxed set on DVD. It was digitally restored and remastered, with additional changes made by George Lucas. The DVD also featured English subtitles, Dolby Digital 5.1 EX surround sound, and commentaries by George Lucas, Ben Burtt, Dennis Muren, and Carrie Fisher. The bonus disc included documentaries including Empire of Dreams: The Story of the Star Wars Trilogy and several featurettes including "The Characters of Star Wars '', "The Birth of the Lightsaber '', and "The Legacy of Star Wars ''. Also included were teasers, trailers, TV spots, still galleries, and a demo for Star Wars: Battlefront.
With the release of Star Wars: Episode III - Revenge of the Sith, which depicts how and why Anakin Skywalker turned to the dark side of the Force, Lucas once again altered Return of the Jedi to bolster the relationship between the original trilogy and the prequel trilogy. The original and 1997 Special Edition versions of Return of the Jedi featured British theatre actor Sebastian Shaw playing both the dying Anakin Skywalker and his ghost. In the 2004 DVD, Shaw 's portrayal of Anakin 's ghost is replaced by Hayden Christensen, who portrayed Anakin in Attack of the Clones and Revenge of the Sith. All three films in the original unaltered Star Wars trilogy were later released, individually, on DVD on September 12, 2006. These versions were originally slated to only be available until December 31, 2006, although they remained in print until May 2011 and were packaged with the 2004 versions again in a new box set on November 4, 2008. Although the 2004 versions in these sets each feature an audio commentary, no other extra special features were included to commemorate the original cuts. The runtime of the 1997 Special Edition of the film and all subsequent releases is approximately five minutes longer than the original theatrical version.
A Blu - ray Disc version of the Star Wars saga was announced for release in 2011 during Star Wars Celebration V. Several deleted scenes from Return of the Jedi were included for the Blu - ray version, including a sandstorm sequence following the Battle at the Sarlacc Pit, a scene featuring Moff Jerjerrod and Death Star officers during the Battle of Endor, and a scene where Darth Vader communicates with Luke via the Force as Skywalker is assembling his new lightsaber before he infiltrates Jabba 's palace. On January 6, 2011, 20th Century Fox Home Entertainment announced the Blu - ray release for September 2011 in three different editions and the cover art was unveiled in May.
On April 7, 2015, Walt Disney Studios, 20th Century Fox, and Lucasfilm jointly announced the digital releases of the six released Star Wars films. Walt Disney Studios Home Entertainment released Return of the Jedi through the iTunes Store, Amazon Video, Vudu, Google Play, and Disney Movies Anywhere on April 10, 2015.
Depending on sources, Return of the Jedi grossed between $475 million and $572 million worldwide. Box Office Mojo estimates that the film sold over 80 million tickets in the US in its initial theatrical run. On the review aggregator website Rotten Tomatoes, the film has an 80 % approval rating with an average score of 7.2 / 10 based on 84 reviews from critics. Its consensus states, "Though failing to reach the cinematic heights of its predecessors, Return of the Jedi remains an entertaining sci - fi adventure and a fitting end to the classic trilogy ''. On Metacritic, the film received a score of 53 / 100 based on 15 reviews from mainstream critics, indicating "mixed or average reviews. ''
Contemporary critics were largely positive. In 1983, film critic Roger Ebert gave the film four stars out of four, and James Kendrick of Q Network Film Desk described Return of the Jedi as "a magnificent experience. '' The film was also featured on the May 23, 1983, TIME magazine cover issue (where it was labeled "Star Wars III ''), where the reviewer Gerald Clarke said that while it was not as exciting as the first Star Wars film, it was "better and more satisfying '' than The Empire Strikes Back, now considered by many as the best of the original trilogy. Vincent Canby of The New York Times called it "by far the dimmest adventure of the lot ''. James Berardinelli of ReelViews.net wrote of the 1997 special edition re-release: "Although it was great fun re-watching Star Wars and The Empire Strikes Back again on the big screen, Return of the Jedi doesn't generate the same sense of enjoyment. And, while Lucas worked diligently to re-invigorate each entry into the trilogy, Jedi needs more than the patches of improved sound, cleaned-up visuals, and a few new scenes. Still, despite the flaws, this is still Star Wars, and, as such, represents a couple of lightly-entertaining hours spent with characters we have gotten to know and love over the years. Return of the Jedi is easily the weakest of the series, but its position as the conclusion makes it a must-see for anyone who has enjoyed its predecessor. ''
While the action set pieces -- particularly the Sarlacc battle sequence, the speeder bike chase on the Endor moon, the space battle between Rebel and Imperial pilots, and Luke Skywalker 's duel against Darth Vader -- are well - regarded, the ground battle between the Ewoks and Imperial stormtroopers remains a bone of contention. Fans are also divided on the likelihood of Ewoks (being an extremely primitive race of small creatures armed with sticks and rocks) defeating an armed ground force comprising the Empire 's "best troops ''. Lucas has defended the scenario, saying that the Ewoks ' purpose was to distract the Imperial troops and that the Ewoks did not really win. His inspiration for the Ewoks ' victory came from the Vietnam War, where the indigenous Vietcong forces prevailed against the technologically superior United States.
At the 56th Academy Awards in 1984, Richard Edlund, Dennis Muren, Ken Ralston, and Phil Tippett received the "Special Achievement Award for Visual Effects. '' Norman Reynolds, Fred Hole, James L. Schoppe, and Michael Ford were nominated for "Best Art Direction / Set Decoration ''. Ben Burtt received a nomination for "Best Sound Effects Editing ''. John Williams received the nomination for "Best Music, Original Score ''. Burtt, Gary Summers, Randy Thom and Tony Dawe all received the nominations for "Best Sound ''. At the 1984 BAFTA Awards, Edlund, Muren, Ralston, and Kit West won for "Best Special Visual Effects ''. Tippett and Stuart Freeborn were also nominated for "Best Makeup ''. Reynolds received a nomination for "Best Production Design / Art Direction ''. Burtt, Dawe, and Summers also received nominations for "Best Sound ''. Williams was also nominated "Best Album of Original Score Written for a Motion Picture or Television Special ''. The film also won for "Best Dramatic Presentation '', the older award for science fiction and fantasy in film, at the 1984 Hugo Awards.
The novelization of Return of the Jedi was written by James Kahn and was released on May 12, 1983, thirteen days before the film 's release.
A radio drama adaptation of the film was written by Brian Daley with additional material contributed by John Whitman and was produced for and broadcast on National Public Radio in 1996. It was based on characters and situations created by George Lucas and on the screenplay by Lawrence Kasdan and George Lucas. The first two Star Wars films were similarly adapted for National Public Radio in the early 1980s, but it was not until 1996 that a radio version of Return of the Jedi was heard. Anthony Daniels returned as C - 3PO, but Mark Hamill and Billy Dee Williams did not reprise their roles as they had for the first two radio dramas. They were replaced by newcomer Joshua Fardon as Luke Skywalker and character actor Arye Gross as Lando Calrissian. John Lithgow voiced Yoda, whose voice actor in the films has always been Frank Oz. Bernard Behrens returned as Obi - Wan Kenobi and Brock Peters reprised his role as Darth Vader. Veteran character actor Ed Begley, Jr. played Boba Fett. Edward Asner also guest - starred speaking only in grunts as the voice of Jabba the Hutt. The radio drama had a running time of three hours.
Principal production of the show was completed on February 11, 1996. Only hours after celebrating its completion with the cast and crew of the show, Daley died of pancreatic cancer. The show is dedicated to his memory.
The cast and crew recorded a get - well message for Daley, but the author never got the chance to hear it. The message is included as part of the Star Wars Trilogy collector 's edition box set.
Marvel Comics published a comic book adaptation of the film by writer Archie Goodwin and artists Al Williamson, Carlos Garzon, Tom Palmer, and Ron Frenz. The adaptation appeared in Marvel Super Special # 27 and as a four - issue limited series. It was later reprinted in a mass market paperback.
Lucasfilm adapted the story for a children 's book - and - record set. Released in 1983, the 24 - page Star Wars: Return of the Jedi read - along book was accompanied by a 33⅓ rpm 7 - inch (18 cm) gramophone record. Each page of the book contained a cropped frame from the film with an abridged and condensed version of the story. The record was produced by Buena Vista Records.
economic effects of world war 1 on germany | Economic History of World War I - wikipedia
The economic history of World War I covers the methods used by the belligerents of the First World War (1914 -- 1918) to finance the conflict, as well as related postwar issues such as war debts and reparations. It also covers the economic mobilization of labor, industry and agriculture. It deals with economic warfare such as the blockade of Germany, and with some issues closely related to the economy, such as military issues of transportation. For a broader perspective see Home front during World War I.
All of the powers in 1914 expected a short war; none had made any economic preparations for a long war, such as stockpiling food or critical raw materials. The longer the war went on, the more the advantages went to the Allies, with their larger, deeper, more versatile economies and better access to global supplies. As Broadberry and Harrison conclude, once stalemate set in late in 1914, the Allies ' greater capacity to take risks, absorb the cost of mistakes, and replace losses was bound eventually to turn the balance against the Central Powers.
The Allies had much more potential wealth they could spend on the war. One estimate (using 1913 US dollars) is that the Allies spent $147 billion on the war and the Central Powers only $61 billion. Among the Allies, Britain and its Empire spent $47 billion and the U.S. $27 billion (the United States entered the war in April 1917), while among the Central Powers, Germany spent $45 billion.
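A quick arithmetic check of these spending estimates (my own illustration, not part of the source) shows the Allies outspending the Central Powers by roughly 2.4 to 1:

```python
# Wartime spending estimates in billions of 1913 US dollars, as given above.
allies_total = 147
central_powers_total = 61

ratio = allies_total / central_powers_total
print(f"Allies outspent the Central Powers about {ratio:.1f} to 1")

# Britain and its Empire ($47B) plus the U.S. ($27B) together exceeded
# Germany's $45B, the largest single share among the Central Powers.
print(47 + 27 > 45)  # True
```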
Total war demanded total mobilization of all the nation 's resources for a common goal. Manpower had to be channeled into the front lines (all the powers except the United States and Britain had large trained reserves designed just for that). Behind the lines labor power had to be redirected away from less necessary activities that were luxuries during a total war. In particular, vast munitions industries had to be built up to provide shells, guns, warships, uniforms, airplanes, and a hundred other weapons both old and new. Agriculture had to provide food for both civilians and for soldiers (some of whom had been farmers and needed to be replaced by women, children and the elderly who now did the work without animal assistance) and for horses to move supplies. Transportation in general was a challenge, especially when Britain and Germany each tried to intercept merchant ships headed for the enemy. Finance was a special challenge. Germany financed the Central Powers. Britain financed the Allies until 1916, when it ran out of money and had to borrow from the United States. The U.S. took over the financing of the Allies in 1917 with loans that it insisted be repaid after the war. The victorious Allies looked to defeated Germany in 1919 to pay reparations that would cover some of their costs. Above all, it was essential to conduct the mobilization in such a way that the short term confidence of the people was maintained, the long - term power of the political establishment was upheld, and the long - term economic health of the nation was preserved.
Gross domestic product (GDP) increased for three Allies (Britain, Italy, and U.S.), but decreased in France and Russia, in neutral Netherlands, and in the three main Central Powers. The shrinkage in GDP in Austria, Russia, France, and the Ottoman Empire reached 30 to 40 %. In Austria, for example, most pigs were slaughtered, so at war 's end there was no meat.
The Western Front quickly stabilized, with almost no movement of more than a few hundred yards. The greatest single expenditure on both sides was for artillery shells, the chief weapon in the war. Since the front was highly stable, both sides built elaborate railway networks that brought supplies within a mile or two of the front lines, with horse - drawn wagons used for the final deliveries. In the ten - month battle at Verdun, the French and Germans fired some 10 million shells in all, weighing 1.4 million tons of steel.
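Taken at face value, the Verdun figures imply an average of roughly 140 kg of steel per shell and tens of thousands of shells fired per day over the ten months. A back-of-the-envelope check (my arithmetic; only the totals come from the text, and the day count is an approximation):

```python
# Verdun figures from the text: ~10 million shells, ~1.4 million tons of steel,
# fired over a roughly ten-month battle.
shells = 10_000_000
steel_tons = 1_400_000
days = 10 * 30  # approximate length of the battle in days

kg_per_shell = steel_tons * 1000 / shells
shells_per_day = shells / days

print(f"{kg_per_shell:.0f} kg of steel per shell on average")
print(f"about {shells_per_day:,.0f} shells per day, both sides combined")
```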
The German counter-blockade with U-Boats was defeated by the convoy system and massive American ship building. Britain paid the war costs of most of its Allies until it ran out of money, then the US took over, funding those Allies and Britain as well.
The British economy (in terms of GDP) grew about 7 % from 1914 to 1918 despite the absence of so many men in the services; by contrast the German economy shrank 27 %. The war saw a decline in civilian consumption, with a major reallocation to munitions. The government share of GDP soared from 8 % in 1913 to 38 % in 1918 (compared to 50 % in 1943).
Despite fears in 1916 that munitions production was lagging, the output was more than adequate. The annual output of artillery grew from 91 guns in 1914 to 8039 in 1918. Warplanes soared from 200 in 1914 to 3200 in 1918, while the production of machine guns went from 300 to 121,000.
In 1915, the Anglo - French Financial Commission agreed a $500 million loan from private American banks. By 1916, Britain was funding most of the Empire 's war expenditures, all of Italy 's and two thirds of the war costs of France and Russia, plus smaller nations as well. The gold reserves, overseas investments and private credit then ran out forcing Britain to borrow $4 billion from the U.S. Treasury in 1917 -- 18. Shipments of American raw materials and food allowed Britain to feed itself and its army while maintaining her productivity. The financing was generally successful, as the City 's strong financial position minimized the damaging effects of inflation, as opposed to much worse conditions in Germany. Overall consumer consumption declined 18 % from 1914 to 1919.
Trade unions were encouraged as membership grew from 4.1 million in 1914 to 6.5 million in 1918, peaking at 8.3 million in 1920 before relapsing to 5.4 million in 1923. Women were available and many entered munitions factories and took other home front jobs vacated by men.
Energy was a critical factor for the British war effort. Most of the energy supplies came from coal mines in Britain, where the issue was labour supply. Critical however was the flow of oil for ships, lorries and industrial use. There were no oil wells in Britain so everything was imported. The U.S. pumped two - thirds of the world 's oil. In 1917, total British consumption was 827 million barrels, of which 85 percent was supplied by the United States, and 6 percent by Mexico. The great issue in 1917 was how many tankers would survive the German u-boats. Convoys and the construction of new tankers solved the German threat, while tight government controls guaranteed that all essential needs were covered. An Inter-Allied Petroleum Conference allocated American supplies to Britain, France and Italy.
An oil crisis occurred in Britain due to the 1917 German submarine campaign. Standard Oil of New Jersey, for example, lost 6 tankers (including the brand new John D. Archbold) between May and September. The only solution to the crisis lay in increased oil shipments from America. The Allies formed the Inter-Allied Petroleum Conference, with the United States, Britain, France, and Italy as members; Standard Oil and Royal Dutch / Shell ran it and made it work. The introduction of convoys as an antidote to the German U-boats and the joint management system by Standard Oil and Royal Dutch / Shell helped to solve the Allies ' supply problems. The close working relationship that evolved was in marked contrast to the feud between the government and Standard Oil years earlier. In 1917 and 1918, there was increased domestic demand for oil, partly due to a cold winter that created a shortage of coal. Inventories and imported oil from Mexico were used to close the gap. In January 1918, the U.S. Fuel Administrator ordered industrial plants east of the Mississippi to close for a week to free up oil for Europe.
Fuel oil for the Royal Navy was the highest priority. In 1917, the Royal Navy consumed 12,500 tons a month, but had a supply of 30,000 tons a month from the Anglo - Persian Oil Company, using their oil wells in Persia.
Clydeside shipyards before 1914 had been the busiest in the world, turning out more than a third of the entire British output. They expanded by a third during the war, primarily to produce transports of the sort that German U-boats were busy sinking. Confident of postwar expansion, the companies borrowed heavily to expand their facilities. But after the war, employment tumbled as the yards proved too big, too expensive, and too inefficient; in any case world demand was down. The most skilled craftsmen were especially hard hit, because there were few alternative uses for their specialized skills.
Ireland was on the verge of civil war in 1914 after Parliament voted a home rule law that was intensely opposed by the Protestants, especially those in Ulster. When the war broke out the law was suspended and Protestants gave very strong support for the war in terms of military service and industrial output.
Occurring during Ireland 's Revolutionary period, the Irish Catholic experience of the war was complex and its memory of it divisive. At the outbreak of the war, most Irish people, regardless of political affiliation, supported the war in much the same way as their British counterparts, and both nationalist and unionist leaders initially backed the British war effort. Their followers, both Catholic and Protestant, served extensively in the British forces, many in three specially raised divisions. Over 200,000 Irishmen fought in the war, in several theatres with 30,000 deaths. In 1916, Catholic supporters of Irish independence from the United Kingdom took the opportunity of the ongoing war to proclaim an Irish Republic and to defend it in an armed rebellion against British rule in Dublin. The rebellion was poorly planned and quickly suppressed. The British executed most of the prisoners which caused Catholic opinion to surge in favour of independence. Britain 's intention to impose conscription in Ireland in 1918 provoked widespread resistance and as a result remained unimplemented.
The Commonwealth nations and India all played major roles. The Asian and African colonies provided large numbers of civilian workers, as well as some soldiers. The Indian Army during World War I contributed a large number of divisions and independent brigades to the European, Mediterranean and the Middle East theatres of war. Over one million Indian troops served overseas, of whom 62,000 died and another 67,000 were wounded.
Canada was prosperous during the war but ethnic conflict escalated almost out of control. In terms of long - run economic trends, the war hardly affected the direction or the speed of change. The trajectory of the main economic factors, the business and financial system, and the technology continued on their way. Women temporarily took war jobs, and at the end of the war there was a great deal of unrest among union members and farmers for a few years.
Billy Hughes, prime minister from October 1915, expanded the government 's role in the economy, while dealing with intense debates over the issue of conscription. Historian Gerhard Fischer argues that the Hughes government aggressively promoted economic, industrial, and social modernization. However, Fischer also says it was done by means of exclusion and repression. He says the war turned a peaceful nation into "one that was violent, aggressive, angst - and conflict - ridden, torn apart by invisible front lines of sectarian division, ethnic conflict and socio - economic and political upheaval. ''
In 1914 the Australian economy was small but the population of five million was very nearly the most prosperous in the world per capita. The nation depended on the export of wool, mutton, wheat and minerals. London provided assurances that it would underwrite the war risk insurance for shipping in order to allow trade amongst the Commonwealth to continue in the face of the German u-boat threat. London imposed controls so that no exports would wind up in German hands. The British government protected prices by buying Australian products even though the shortage of shipping meant that there was no chance that they would ever receive them. On the whole Australian commerce expanded. In terms of value, Australian exports rose almost 45 per cent, while the number of Australians employed in the manufacturing industry increased over 11 per cent. Iron mining and steel manufacture grew enormously. Inflation became a factor as consumer prices went up, while the cost of exports was deliberately kept lower than market value in an effort to prevent further inflationary pressures worldwide. As a result, the cost of living for many average Australians was increased.
The trade union movement, already powerful, grew rapidly, though it split on the political question of conscription. Despite considerable rises in the costs of many basic items, the government sought to stabilize wages, much to the anger of unionists. Although the average weekly wage during the war increased by between 8 and 12 per cent, this was not enough to keep up with inflation, and as a result there was considerable discontent amongst workers, to the extent that industrial action followed. Not all of these disputes were due to economic factors; in part they were the result of violent opposition to conscription, which many trade unionists opposed. Nevertheless, the result was very disruptive, and it has been estimated that between 1914 and 1918 there were 1,945 industrial disputes, resulting in 8,533,061 working days lost and £ 4,785,607 in lost wages.
The cost of the war was £ 377 million, of which 70 % was borrowed and the rest came from taxes. Overall, the war had a significantly negative impact on the Australian economy. Real aggregate Gross Domestic Product (GDP) declined by 9.5 percent over the period 1914 to 1920, while the mobilization of personnel resulted in a 6 percent decline in civilian employment. Meanwhile, although population growth continued during the war years, it was only half that of the prewar rate. Per capita incomes also declined sharply, falling by 16 percent.
South Africa 's main economic role was in the supply of two - thirds of the gold production in the British Empire (most of the remainder came from Australia). When the war began, Bank of England officials worked with the government of South Africa to block any gold shipments to Germany, and to force the mine owners to sell only to the Treasury, at prices the Treasury set. This facilitated purchases of munitions and food in the U.S. and other neutral countries. By 1919 London lost control to the mining companies (which were now backed by the South African government), which wanted the higher prices and sales to New York that a free market would provide.
The Germans invaded Belgium at the start of the war and Belgium remained occupied for its entire duration. There was large - scale spontaneous resistance, both militant and passive. Over 1.4 million refugees fled to France or to the neutral Netherlands. Over half the German regiments in Belgium were involved in major incidents. After the atrocities committed by the German army in the first few weeks of the war, German civil servants took control and were generally correct, albeit strict and severe. Belgium was heavily industrialized; while farms operated and small shops stayed open, some large establishments shut down or drastically reduced their output. The faculty closed the universities; many publishers shut down their newspapers. Most Belgians "turned the four war years into a long and extremely dull vacation, '' according to Kossmann. In 1916 Germany deported 120,000 men to work in Germany; this led to protests from neutral countries and they were returned. Germany then stripped some factories of useful machinery, and used the rest as scrap iron for its steel mills.
At the start of war, silver 5 franc coins were collected and melted down by the National Bank to augment its silver reserves. They were exchangeable for paper banknotes, and later zinc coins, although many demonetized silver coins were hoarded. With the German invasion, the National Bank 's reserves were transferred to Antwerp and eventually to England where they were deposited at the Bank of England. Throughout the German occupation there was a shortage of official coins and banknotes in circulation, and so around 600 communes, local governments and companies issued their own unofficial "Necessity Money '' to enable the continued functioning of the local economies. The Belgian franc was fixed at an exchange rate of 1 franc to 1.25 German mark, which was also introduced as legal tender.
Neutral countries led by the United States set up the Commission for Relief in Belgium, headed by American engineer Herbert Hoover. It shipped in large quantities of food and medical supplies, which it tried to reserve for civilians and keep out of the hands of the Germans. Many businesses collaborated with the Germans. The government set up judicial proceedings to punish the collaborators.
Rubber had long been the main export of the Belgian Congo and production levels held up during the war but its importance fell from 77 % of exports (by value) to only 15 %. New resources were opened, especially copper mining in Katanga Province. The Union Minière du Haut Katanga company dominated the copper industry, exporting its product along a direct rail line to the sea at Beira. The war caused a heavy demand for copper, and production soared from 997 tons in 1911 to 27,000 tons in 1917, then fell off to 19,000 tons in 1920. Smelters operated at Elisabethville. Before the war the copper was sold to Germany and, in order to prevent loss of capacity, the British purchased all the Congo 's wartime output with the revenues going to the Belgian government in exile. Diamond and gold mining also expanded during the war. The Anglo - Dutch firm Lever Bros. greatly expanded the palm oil business during the war and there was an increased output of cocoa, rice and cotton. New rail and steamship lines opened to handle the expanded export traffic.
The German invasion captured 40 % of France 's heavy industry in 1914, especially in steel and coal. The French GDP in 1918 was 24 % smaller than in 1913; since a third went into the war effort, the civilian standard of living fell in half. But thousands of little factories opened up across France, hiring women, youth, elderly, disabled veterans, and behind the lines soldiers. Algerian and Vietnamese laborers were brought in. Plants produced 200,000 75mm shells a day. The US provided much food, steel, coal and machine tools, and $3.6 billion in loans to finance it all; the British loaned another $3 billion.
Considerable relief came with the influx of American food, money and raw materials in 1917. The economy was supported after 1917 by American government loans which were used to purchase foods and manufactured goods. The arrival of over a million American soldiers in 1918 brought heavy spending for food and construction materials.
France 's diverse regions suffered in different ways. While the occupied area in 1913 contained only 14 % of France 's industrial workers, it produced 58 % of the steel, and 40 % of the coal. War contracts made some firms prosperous but on the whole did not compensate for the loss of foreign markets. There was a permanent loss of population caused by battle deaths and emigration.
The economy of Algeria was severely disrupted. Internal lines of communication and transportation were disrupted, and shipments of the main export, cheap wine, had to be cut back. Crime soared as French forces were transferred to the Western Front, and there was rioting in the province of Batna. Shortages mounted, inflation soared, banks cut off credit, and the provincial government was ineffective.
The French government floated four war bond issues on the London market and raised 55 million pounds. These bonds were denominated in francs instead of pounds or gold, and were not guaranteed against exchange rate fluctuations. After the war the franc lost value, and the British bondholders tried, and failed, to get restitution.
J.P. Morgan & Co. of New York was the major American financier for the Allies, and worked closely with French bankers. However its dealings became strained because of growing misunderstandings between the Wall Street bankers and French bankers and diplomats.
French colonies supplied workers for munitions factories and other jobs in France. A famous example was Ho Chi Minh who worked in Paris, and was highly active in organizing fellow Vietnamese, and even demanding a voice for them at the Paris Peace Conference in 1919. The French army enlisted hundreds of thousands of colonials. From Africa came 212,000 soldiers, of whom 160,000 fought on the Western front.
The rapid unplanned buildup of French military operations in Africa disrupted normal trade relations and all the colonies, especially disrupting food supplies for the cities and distorting the local labor markets. French administrators, focused on supporting the armies on the Western Front, disregarded or suppressed protest movements.
The Russian economy was far too backward to sustain a major war, and conditions deteriorated rapidly, despite financial aid from Britain. By late 1915 there was a severe shortage of artillery shells. The very large but poorly equipped Russian army fought tenaciously and desperately despite its poor organisation and lack of munitions. Casualties were enormous. By 1915, many soldiers were sent to the front unarmed, and told to pick up whatever weapons they could from the battlefield.
The onset of World War I exposed the poor administrative skills of the czarist government under Nicholas II. A show of national unity had accompanied Russia 's entrance into the war, with defense of the Slavic Serbs the main battle cry. In the summer of 1914, the Duma and the zemstva expressed full support for the government 's war effort. The initial conscription was well organized and peaceful, and the early phase of Russia 's military buildup showed that the empire had learned lessons from the Russo - Japanese War. But military reversals and the government 's incompetence soon soured much of the population. Enemy control of the Baltic Sea and the Black Sea severed Russia from most of its foreign supplies and markets.
Russia had not prepared for a major war and reacted very slowly as problems mounted in 1914 - 16. Inflation became a serious problem. Because of inadequate material support for military operations, the War Industry Committees were formed to ensure that necessary supplies reached the front. But army officers quarreled with civilian leaders, seized administrative control of front areas, and refused to cooperate with the committees. The central government distrusted the independent war support activities that were organized by zemstva and cities. The Duma quarreled with the war bureaucracy of the government, and center and center - left deputies eventually formed the Progressive Bloc to create a genuinely constitutional government. While the central government was hampered by court intrigue, the strain of the war began to cause popular unrest. Food shortages increasingly impacted urban areas, caused by military purchases, transportation bottlenecks, financial confusion, and administrative mismanagement. By 1915 high food prices and fuel shortages caused strikes in some cities. Food riots became more common and more violent, and readied the angry populace for withering political attacks on the czarist regime. Workers, who had won the right to representation in sections of the War Industries Committee, used those sections to mobilize political opposition. The countryside also was becoming restive. Soldiers were increasingly insubordinate, particularly the newly recruited peasants who faced the prospect of being used as cannon fodder in the inept conduct of the war.
The bad situation continued to deteriorate. Increasing conflict between the tsar and the Duma destroyed popular and elite support for the old regime. In early 1917, deteriorating rail transport caused acute food and fuel shortages, which resulted in escalating riots and strikes. Authorities summoned troops to quell the disorders in Petrograd (as St. Petersburg had been called since September 1914, to Russianize the Germanic name). In 1905 troops had fired on demonstrators and saved the monarchy, but in 1917 the troops turned their guns over to the angry crowds. Public support for the tsarist regime simply evaporated in 1917, ending three centuries of Romanov rule.
Italy joined the Allies in 1915, but was poorly prepared for war. Loans from Britain paid for nearly all its war expenses. The Italian army of 875,000 men was poorly led and lacked heavy artillery and machine guns. The industrial base was too small to provide adequate amounts of modern equipment, and the old - fashioned rural base did not produce much of a food surplus.
Before the war the government had ignored labor issues, but now it had to intervene to mobilize war production. With the main working - class Socialist party reluctant to support the war effort, strikes were frequent and cooperation was minimal, especially in the Socialist strongholds of Piedmont and Lombardy. The government imposed high wage scales, as well as collective bargaining and insurance schemes. Many large firms expanded dramatically. The workforce at the Ansaldo munitions company grew from 6,000 to 110,000 as it manufactured 10,900 artillery pieces, 3,800 warplanes, 95 warships and 10 million artillery shells. At Fiat the workforce grew from 4,000 to 40,000. Inflation doubled the cost of living. Industrial wages kept pace but not wages for farm workers. Discontent was high in rural areas since so many men were taken for service, industrial jobs were unavailable, wages grew slowly and inflation was just as bad.
In terms of munitions production, the 15 months after April 1917 involved an amazing parade of mistakes, misguided enthusiasm, and confusion. Americans were willing enough, but they did not know their proper role. Wilson was unable to figure out what to do when, or even to decide who was in charge. Typical of the confusion was the coal shortage that hit in December 1917. Because coal was by far the largest source of energy and heat, a grave crisis ensued. There was in fact plenty of coal being mined, but 44,000 loaded freight and coal cars were tied up in horrendous traffic jams in the rail yards of the East Coast. Two hundred ships were waiting in New York harbor for cargo that was delayed by the mess. The solution included nationalizing the coal mines and the railroads for the duration, shutting down factories one day a week to save fuel, and enforcing a strict system of priorities. Only in March 1918 did Wilson finally take control of the crisis.
The war saw many women gaining access to and taking on jobs traditionally assigned to men. Many worked on the assembly lines of factories, producing trucks and munitions. The morale of the women remained high, as millions joined the Red Cross as volunteers to help soldiers and their families. With rare exceptions, the women did not protest the draft. For the first time, department stores employed African American women as elevator operators and cafeteria waitresses.
Samuel Gompers, head of the AFL, and nearly all labor unions were strong supporters of the war effort. They minimized strikes as wages soared and full employment was reached. The AFL unions strongly encouraged their young men to enlist in the military, and fiercely opposed efforts to reduce recruiting and slow war production by the anti-war labor union called the Industrial Workers of the World (IWW) and also left - wing Socialists. President Wilson appointed Gompers to the powerful Council of National Defense, where he set up the War Committee on Labor. AFL membership soared to 2.4 million in 1917. In 1919, the unions tried to make their gains permanent and called a series of major strikes in the meat, steel and other industries. The strikes, all of which failed, forced unions back to their position of around 1910.
While Germany rapidly mobilized its soldiers, it had to improvise the mobilization of the civilian economy for the war effort. It was severely handicapped by the British blockade that cut off food supplies, machinery and raw materials.
Walter Rathenau played the key role in convincing the War Ministry to set up the War Raw Materials Department (Kriegsrohstoffabteilung -- "KRA ''); he was in charge of it from August 1914 to March 1915 and established the basic policies and procedures. His senior staff were on loan from industry. The KRA focused on raw materials threatened by the British blockade, as well as supplies from occupied Belgium and France. It set prices and regulated the distribution to vital war industries, and began the development of ersatz raw materials. The KRA suffered many inefficiencies caused by the complexity and selfishness it encountered from commerce, industry, and the government. Some two dozen additional agencies were created dealing with specific products; the agencies could confiscate supplies and redirect them to the munitions factories. Cartels were created and small firms merged into larger ones for greater efficiency and ease of central control.
The military took an increasingly dominant role in setting economic priorities and in direct control of vital industries. It was usually inefficient, but it performed very well in aircraft. The army set prices and wages, gave out draft exemptions, guaranteed the supply of credit and raw materials, limited patent rights, and supervised management -- labor relationships. The industry expanded very rapidly with high quality products and many innovations, and paid wages well above the norm for skilled workers.
Total spending by the national government reached 170 billion marks during the war, of which taxes covered only 8 %; the rest was borrowed from German banks and private citizens. Eight national war loans reached out to the entire population and raised 100 billion marks. It proved almost impossible to borrow money from outside. The national debt rose from only 5 billion marks in 1914 to 156 billion in 1918. These bonds became worthless in 1923 because of hyperinflation.
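The financing shares above can be cross-checked with simple arithmetic (my own sketch, using only the figures given in the text):

```python
# German national government war spending, in billions of marks.
total_spending = 170
tax_share = 0.08  # taxes covered only 8 %

from_taxes = total_spending * tax_share
borrowed = total_spending - from_taxes

print(f"Raised by taxes: {from_taxes:.1f} billion marks")
print(f"Borrowed: {borrowed:.1f} billion marks")
# Broadly consistent with the national debt rising from 5 billion marks
# in 1914 to 156 billion in 1918.
```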
As the war went on conditions deteriorated rapidly on the home front, with severe food shortages reported in all urban areas by 1915. Causes involved the transfer of many farmers and food workers into the military, an overburdened railroad system, shortages of coal, and the British blockade that cut off imports from abroad. The winter of 1916 -- 1917 was known as the "turnip winter '', because that vegetable, usually fed to livestock, was used by people as a substitute for potatoes and meat, which were increasingly scarce. Thousands of soup kitchens were opened to feed the hungry people, who grumbled that the farmers were keeping the food for themselves. Even the army had to cut the rations for soldiers. Morale of both civilians and soldiers continued to sink.
In the Ottoman Empire Turkish nationalists took control before the war began. They drove out Greeks and Armenians who had been the backbone of the business community, replacing them with ethnic Turks who were given favorable contracts but who lacked the international connections, credit sources, and entrepreneurial skills needed for business. The Ottoman economy was based on subsistence agriculture; there was very little industry. Turkish wheat was in high demand, but transportation was rudimentary and not much of it reached Germany. The war cut off imports except from Germany. Prices quadrupled. The Germans provided loans and supplied the army with hardware, especially captured Belgian and Russian equipment. Other supplies were in short supply; the soldiers were often in rags. Medical services were very bad and illness and death rates were high. Most of the Ottoman soldiers deserted when they had the opportunity, so the force level shrank from a peak strength of 800,000 in 1916 to only 100,000 in 1918.
The Austro-Hungarian monarchy was a personal union of two countries created by the Compromise of 1867. The Kingdom of Hungary had lost its former status after the Hungarian Revolution of 1848, but following the 1867 reforms, the Austrian and Hungarian states became co-equal within the Empire. Austria-Hungary was geographically the second-largest country in Europe after the Russian Empire, at 621,538 km² (239,977 sq mi), and the third-most populous (after Russia and the German Empire). In comparison with Germany and Britain, the Austro-Hungarian economy lagged considerably, as sustained modernization had begun much later there. The Empire built up the fourth-largest machine-building industry in the world, after the United States, Germany, and Britain. Austria-Hungary was also the world's third-largest manufacturer and exporter of electric home appliances, electric industrial appliances, and facilities for power plants, after the United States and the German Empire.
The Empire of Austria and the Kingdom of Hungary had always maintained separate parliaments: the Imperial Council (Austria) and the Diet of Hungary. Except for the Pragmatic Sanction of 1713, common laws never existed in the Empire of Austria and the Kingdom of Hungary.
There was no common citizenship: one was either an Austrian citizen or a Hungarian citizen, never both. Austria and Hungary were fiscally sovereign and independent entities. The Kingdom of Hungary could preserve its separated and independent budget.
However, by the end of the 19th century, economic differences gradually began to even out as economic growth in the eastern parts of the Empire consistently surpassed that in the western. The strong agriculture and food industry of the Kingdom of Hungary, centered on Budapest, became predominant within the empire and made up a large proportion of the exports to the rest of Europe. Meanwhile, western areas, concentrated mainly around Prague and Vienna, excelled in various manufacturing industries. This division of labour between east and west, together with the existing economic and monetary union, led to even more rapid economic growth throughout Austria-Hungary by the early 20th century. Austria preserved its dominance within the empire in the sectors of the first industrial revolution, but Hungary had a better position in the industries of the second industrial revolution; in these modern industrial sectors, Austrian competition could not become overwhelming.
The empire 's heavy industry had mostly focused on machine building, especially for the electric power industry, locomotive industry and automotive industry, while in light industry the precision mechanics industry was the most dominant.
During the war the national governments of Vienna and Budapest set up a highly centralized war economy, resulting in a bureaucratic dictatorship. It drafted skilled workers and engineers without realizing the damage it did to the economy.
The Czech region had a more advanced economy, but was reluctant to support the war effort. Czechs rejected any customs union with Germany, because it threatened their language and culture. Czech bankers had an eye to early independence; they purchased many securities from the Czech lands, thus insuring their strong domestic position in what became Czechoslovakia in 1918.
Bulgaria, a poor rural nation of 4.5 million people, at first stayed neutral. In 1915 it joined the Central Powers. It mobilized a very large army of 800,000 men, using equipment supplied by Germany. Bulgaria was ill - prepared for a long war; absence of so many soldiers sharply reduced agricultural output. Much of its best food was smuggled out to feed lucrative black markets elsewhere. By 1918 the soldiers were not only short of basic equipment like boots but they were being fed mostly corn bread with a little meat. The peace treaty in 1919 stripped Bulgaria of its conquests, reduced its army to 20,000 men, and demanded reparations of £ 100 million.
Chile's international trade collapsed and state income fell to half of its previous value after the start of World War I in 1914. The Haber process, first applied on an industrial scale in 1913 and later used as part of Germany's war effort due to its lack of access to Chilean saltpetre, ended Chile's monopoly on nitrate and led to an economic decline in Chile. In addition, the opening of the Panama Canal in 1914 caused a severe drop in traffic along Chilean ports due to shifts in maritime trade routes.
Conditions on the Continent were bad for every belligerent. Britain sustained the lightest damage to its civilian economy, apart from its loss of men. The major damage was to its merchant marine and to its financial holdings. The United States and Canada prospered during the war. The reparations levied on Germany by the Treaty of Versailles were, in theory, supposed to restore the damage to the civilian economies, but little of the reparations money went for that. Most of Germany 's reparations payments were funded by loans from American banks, and the recipients used them to pay off loans they had from the U.S. Treasury. Between 1919 and 1932, Germany paid out 19 billion goldmarks in reparations, and received 27 billion goldmarks in loans from New York bankers and others. These loans were eventually paid back by Germany after World War II.
Who Wants to Be a Millionaire (U.S. game show)
Who Wants to Be a Millionaire (often informally called Millionaire) is an American television game show based on the same - titled British program and developed for the United States by Michael Davies. The show features a quiz competition in which contestants attempt to win a top prize of $1,000,000 by answering a series of multiple - choice questions of increasing difficulty (although, for a time, most of the questions were of random difficulty). The program has endured as one of the longest - running and most successful international variants in the Who Wants to Be a Millionaire? franchise.
The original U.S. version aired on ABC from August 16, 1999 to June 27, 2002, and was hosted by Regis Philbin. The daily syndicated version of the show began airing on September 16, 2002, and was hosted for eleven seasons by Meredith Vieira until May 31, 2013. Later hosts included Cedric the Entertainer in the 2013 -- 14 season, Terry Crews in the following season (2014 -- 15), and Chris Harrison, who began hosting on September 14, 2015.
As the first U.S. network game show to offer a million - dollar top prize, the show made television history by becoming one of the highest - rated game shows in the history of American television. The U.S. Millionaire has won seven Daytime Emmy Awards, and TV Guide ranked it No. 6 in its 2013 list of the 60 greatest game shows of all time.
At its core, the game is a quiz competition in which the goal is to correctly answer a series of fourteen (originally fifteen) consecutive multiple - choice questions. The questions are of increasing difficulty, except in the 2010 -- 15 format overhaul, where the contestants were faced with a round of ten questions of random difficulty, followed by a round of four questions of increasing difficulty. Each question is worth a specified amount of money; the amounts are cumulative in the first round, but not in the second. If the contestant gives a wrong answer to any question, their game is over and their winnings are reduced (or increased, in the first two questions) to $1,000 for tier - one questions, $5,000 for tier - two questions, and $50,000 for tier - three questions. However, the contestant has the option of "walking away '' without giving an answer after being presented with a question, in which case the game ends and the contestant is guaranteed to walk away with all the money they have previously received. With the exception of the shuffle format, upon correctly answering questions five and ten, contestants are guaranteed at least the amount of prize money associated with that level. If the contestant gives an incorrect answer, their winnings drop down to the last milestone achieved. Since 2015, if the contestant answers a question incorrectly before reaching question five, he or she leaves with $1,000, even on the first question that is worth only $500. For celebrities, the minimum guarantee for their nominated charities is $10,000. Prior to the shuffle format, a contestant left with nothing if they answered a question incorrectly before reaching the first milestone. In the shuffle format, contestants who incorrectly answered a question had their winnings reduced to $1,000 in round one and $25,000 in round two.
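The safe-haven rules described above can be summarized in a small sketch. This is purely illustrative (the function name and structure are invented here, not taken from the show), assuming the post-2015 ladder with safe havens at questions five and ten and the $1,000 floor introduced in 2015:

```python
# Illustrative sketch of the classic-format payout rules described above;
# assumes the post-2015 ladder (safe havens at questions 5 and 10, with a
# $1,000 floor for a miss on any of the first five questions).
SAFE_HAVENS = {5: 5_000, 10: 50_000}  # question number -> guaranteed amount

def payout_on_wrong_answer(questions_answered: int) -> int:
    """Winnings kept after a wrong answer, given how many questions
    were already answered correctly."""
    guaranteed = 1_000  # since 2015, even an early miss pays $1,000
    for milestone, amount in SAFE_HAVENS.items():
        if questions_answered >= milestone:
            guaranteed = max(guaranteed, amount)
    return guaranteed

print(payout_on_wrong_answer(3))   # miss before the first safe haven -> 1000
print(payout_on_wrong_answer(7))   # passed question 5 -> 5000
print(payout_on_wrong_answer(12))  # passed question 10 -> 50000
```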
From 1999 to 2002, ten contestants would play a round of Fastest Finger to determine who would be the first to play in the hot seat. The contestants were faced with a question and four answers, which they had to put in the correct order (ascending, chronological, etc.) as quickly as possible. The contestant who correctly keyed in the order in the fastest time went on to play for one million dollars. If no one got it right, the round was replayed until someone did; in the event of a tie, the tied contestants played another Fastest Finger question until one of them was quicker. This round was eliminated from the game when the syndicated version started, though it returned in 2004 for Super Millionaire and in 2009 for the show's 10th anniversary. The format remained unchanged, aside from alterations to the money ladder and the addition of a new lifeline, until 2008.
In 2008, the format was altered to include a time limit on each question. The amount of time for each question was as follows:
For the last question, the amount of time that was not used on the previous questions was banked and added onto the 45 seconds already allowed on the question, to give the contestant a better chance at winning the million.
The timer would count down the second the answer options appeared and the contestant would have to give their final answer within that length of time. If the contestant ran out of time at any point, the contestant would have to leave with the amount of money they had banked at the time. In addition the categories of the questions were displayed before the question was asked, titled as the ' Millionaire Menu '. In 2009, the money ladder was altered slightly.
The format was overhauled significantly in September 2010. The game was now split into two rounds. In round one the contestant faced ten questions, each assigned to a different amount of money that was randomized at the start of the game, so the difficulty of a question was not tied to the amount of money it was worth; the value was revealed only after the question was either answered or jumped. The value of each question was added to the contestant's bank, with a maximum possible round-one total of $68,600. As usual, the contestant could leave with their current winnings at any point, though a contestant who walked away before completing round one received only half of the banked money (i.e., a contestant who banked $30,000 and walked away would receive only $15,000). The second round consisted of four questions presented in the traditional format, with the difficulty of each question tied to the amount of money it was worth. A contestant who gave a wrong answer in the first round went home with nothing, while those who made it to round two were guaranteed $25,000.
From 2011 -- 2014, some weeks were ' Double Your Money ' weeks, so if a contestant got a question right, its value would be doubled with a maximum jackpot of $93,600. However if a question was jumped, its value was not doubled.
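The shuffle-format money rules in the two paragraphs above can be modeled in a short sketch. This is illustrative only; the function names are invented here, and the values and rules are taken from the text:

```python
# A rough model of the 2010-15 shuffle-format money rules described above.
# Not the show's actual logic; values and rules come from the text.
def walk_away_payout(bank: int, completed_round_one: bool) -> int:
    """A contestant who walks away before completing round one keeps only
    half the bank; after completing round one, the full bank is kept."""
    return bank if completed_round_one else bank // 2

def question_value(base_value: int, double_money_week: bool, jumped: bool) -> int:
    """During 'Double Your Money' weeks, a correctly answered question's
    value was doubled, but a jumped question's value was not."""
    if double_money_week and not jumped:
        return base_value * 2
    return base_value

print(walk_away_payout(30_000, False))      # the text's example -> 15000
print(question_value(10_000, True, False))  # doubled -> 20000
print(question_value(10_000, True, True))   # jumped, not doubled -> 10000
```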
With the hiring of new host Chris Harrison, the format was changed once again to resemble that of the original Millionaire. Each contestant faces 14 general - knowledge questions of increasing difficulty, with no time limit or information about the categories. As of 2017, a contestant who misses any of the first five questions leaves with $1,000, even if the missed question is of a lower value.
Five different ladders have been used over the course of the series.
The original primetime payment structure was also used for the first two seasons of the syndicated version (2002 -- 04). The third syndicated season in 2004 saw a reduction in the values for questions ten through twelve. In the eighth syndicated season (2009 -- 10), the lower question values were adjusted to raise the first safe haven to $5,000. When the "shuffle format" was used, the first ten questions had random amounts from $100 to $25,000, as listed above. In addition, the number of questions needed to win the million was reduced to 14, removing the $50,000 level; the last four values remained the same for round two. The values were doubled during "Double Money Weeks". When the shuffle format ended at the start of the 2015 -- 16 season, the switch to 14 questions was retained, the first safe haven was kept at $5,000, but the second was raised to $50,000.
The $500,000 and $1,000,000 prizes were initially lump - sum payments, but were changed to annuities in September 2002 when the series moved to syndication. Contestants winning either of these prizes receive $250,000 thirty days after their show broadcasts and the remainder paid in equal annual payments. The $500,000 prize consists of $25,000 per year for 10 years, while the $1,000,000 prize consists of $37,500 per year for 20 years.
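A quick check of the annuity arithmetic stated above: each prize equals the $250,000 initial payment plus the sum of the equal annual installments.

```python
# Verifying the annuity figures described above: a $250,000 initial
# payment plus equal annual installments sums to each advertised prize.
initial_payment = 250_000
prize_500k = initial_payment + 25_000 * 10  # 10 annual payments of $25,000
prize_1m = initial_payment + 37_500 * 20    # 20 annual payments of $37,500
print(prize_500k)  # 500000
print(prize_1m)    # 1000000
```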
Since 2017, contestants who answer one of the first five questions incorrectly receive a $1,000 consolation prize; on the original primetime version and on the earlier seasons of the syndicated version prior to 2010, contestants who missed one of the first five questions left with no winnings.
Forms of assistance known as "lifelines '' are available for a contestant to use if a question proves difficult. Multiple lifelines may be used on a single question, but each one can only be used once per game (unless otherwise noted below). Three lifelines are available from the start of the game. Depending on the format of the show, additional lifelines may become available after the contestant correctly answers the fifth or tenth question. In the clock format, usage of lifelines temporarily paused the clock while the lifelines were played.
Over the course of the program 's history, 12 people have answered the final question correctly and walked away with the top prize. These include:
The original network version of the U.S. Millionaire and the subsequent primetime specials were hosted by Regis Philbin. When the syndicated version was being developed, the production team felt that it was not feasible for Philbin to continue hosting, as the show recorded four episodes in a single day, and that the team was looking for qualities in a new host: it had to be somebody who would love the contestants and be willing to root for them. Rosie O'Donnell was initially offered a hosting position on this new edition, but declined the opportunity almost immediately. Eventually Meredith Vieira, who had previously competed in a celebrity charity event on the original network version, was named host of the new syndicated edition.
ABC originally offered Vieira hosting duties on the syndicated Millionaire to sweeten one of her re-negotiations for the network 's daytime talk show The View, which she was moderating at the time. When the show was honored by GSN on its Gameshow Hall of Fame special, Vieira herself further explained her motivation for hosting the syndicated version as follows:
From 2007 to 2011, when Vieira was concurrently working as a co-host of Today, guest hosts appeared in the second half of each season of the syndicated version. Guest hosts who filled in for Vieira included Philbin, Al Roker, Tom Bergeron, Tim Vincent, Dave Price, Billy Bush, Leeza Gibbons, Cat Deeley, Samantha Harris, Shaun Robinson, Steve Harvey, John Henson, Sherri Shepherd, Tim Gunn, and D.L. Hughley.
On January 10, 2013, Vieira announced that after eleven seasons with the syndicated Millionaire, she would be leaving the show as part of an effort to focus on other projects in her career. She finalized taping of her last episodes with the show in November 2012. While Philbin briefly considered a return to the show, Cedric the Entertainer was introduced as her successor when season twelve premiered on September 2, 2013. On April 30, 2014, Deadline announced that Cedric had decided to leave the show in order to lighten his workload, resulting in him being succeeded by Terry Crews for the 2014 -- 15 season. Crews was succeeded by Chris Harrison, host of The Bachelor and its spin - offs, when season 14 premiered on September 14, 2015.
The original executive producers of the U.S. Millionaire were British television producers Michael Davies and Paul Smith, the latter of whom undertook the responsibility of licensing Millionaire to American airwaves as part of his effort to transform the UK program into a global franchise. Smith served until 2007 and Davies until 2010; additionally, Leigh Hampton (previously co-executive producer in the later days of the network version and in the syndicated version 's first two seasons) served as an executive producer from 2004 to 2010. Rich Sirop, who was previously a supervising producer, became the executive producer in 2010 and held that position until 2014, when he left Millionaire to hold the same position with Vieira 's newly launched syndicated talk show, and was replaced by James Rowley. Vincent Rubino, who had previously been the syndicated Millionaire 's supervising producer for its first two seasons, served as that version 's co-executive producer for the 2004 -- 05 season, after which he was succeeded by Vieira herself, who continued to hold the title until her departure in 2013 (sharing her position with Sirop for the 2009 -- 10 season).
Producers of the network version included Hampton, Rubino, Leslie Fuller, Nikki Webber, and Terrence McDonnell. For its first two seasons the syndicated version had Deirdre Cossman for its managing producer, then Dennis F. McMahon became producer for the next two seasons (joined by Dominique Bruballa as his line producer), after which Jennifer Weeks produced the next four seasons of syndicated Millionaire shows, initially accompanied by Amanda Zucker as her line producer, but later joined for the 2008 -- 09 season by Tommy Cody (who became sole producer in the 2009 -- 10 season). The first 65 shuffle format episodes were produced by McPaul Smith, and as of 2011, the title of producer is held by Bryan Lasseter. The network version had Ann Miller and Tiffany Trigg for its supervising producers; they were joined by Wendy Roth in the first two seasons, and by Michael Binkow in the third and final season. After Rubino 's promotion to co-executive producer, the syndicated version 's later supervising producers included Sirop (2004 -- 09), Geena Gintzig (2009 -- 10), Brent Burnette (2010 -- 12), Geoff Rosen (2012 -- 14), and Liz Harris (2014 -- 16), who currently serves as the co-executive producer.
The original network version of Millionaire was directed by Mark Gentile, who later served as the syndicated version 's consulting producer for its first two seasons and as the director of Duel, which ran on ABC from December 2007 to July 2008. The syndicated version was directed by Matthew Cohen from 2002 to 2010, by Rob George from 2010 to 2013, and by Brian McAloon in the 2013 -- 14 season. Former Price Is Right director Rich DiPirro became Millionaire 's director in 2014.
The U.S. version of Millionaire is a co-production of 2waytraffic, a division of Sony Pictures Entertainment, and Valleycrest Productions, a division of The Walt Disney Company. 2waytraffic purchased Millionaire 's original production company Celador in 2008, while Valleycrest has produced the series since its beginning, and holds the copyright on all U.S. Millionaire episodes to date. The show is distributed by Valleycrest 's corporate sibling Disney - ABC Home Entertainment & Television Distribution (previously known as Buena Vista Television).
The U.S. Millionaire was taped at ABC 's Television Center East studio on the Upper West Side of Manhattan in New York from 1999 to 2012. Tapings were moved to NEP Broadcasting 's Metropolis Studios in East Harlem in 2013, and production moved to studios located in Stamford, Connecticut the following year. For the 2016 -- 17 season, production relocated to Bally 's Hotel and Casino in Las Vegas, Nevada. Episodes of the syndicated version are produced from June to December. The show originally taped four episodes in a single day, but that number has since been changed to five.
When the U.S. version of Millionaire was first conceived in 1998, Michael Davies was a young television producer who was serving as the head of ABC 's little - noticed reality programming division (at a time when reality television had not yet become a phenomenon in America). At that time, ABC was lingering in third place in the ratings indexes among U.S. broadcast networks, and was on the verge of losing its status as one of the "Big Three '' networks. Meanwhile, the popularity of game shows was at an all - time low, and with the exception of The Price Is Right, the genre was absent from networks ' daytime lineups at that point. Having earlier created Debt for Lifetime Television and participated with Al Burton and Donnie Brainard in the creation of Win Ben Stein 's Money for Comedy Central, Davies decided to create a primetime game show that would save the network from collapse and revive interest in game shows.
Davies originally considered reviving CBS's long-lost quiz show The $64,000 Question, with a new home on ABC. However, development of that revival was cut short: when Davies heard that the British Millionaire was about to make its debut, he had friends and family members in the UK record the show, and subsequently received about eight FedEx packages from different family members, each containing a copy of Millionaire's first episode. Davies was so captivated by everything he had seen and heard, from host Chris Tarrant's intimate involvement with the contestants to the show's lighting system and music tracks, that he chose to abandon his work on the $64,000 Question revival in favor of introducing Millionaire to American airwaves, convinced that it would become extraordinarily popular.
When Davies presented his ideas for the U.S. Millionaire to ABC, the network's executives initially rejected them, so he resigned his position there and became an independent producer. Determined to bring his idea for the show to fruition, Davies decided that he was betting his whole career on Millionaire's production, and his first move was to attach a celebrity host to the show. Along with Philbin, a number of other popular television personalities were considered for hosting duties during the show's development, including Peter Jennings, Bob Costas, Phil Donahue, and Montel Williams, but among those considered, Philbin wanted the job the most; after Philbin saw an episode of the British Millionaire and was blown away by its content, Davies and his team ultimately settled on having him host the American show. When Davies approached ABC again after hiring Philbin, the network finally agreed to accept the U.S. Millionaire. With production now ready to begin, the team had only five months to finish developing the show and get it launched, with Davies demanding perfection in every element of Millionaire's production.
With few exceptions, any legal resident of the United States who is 18 years of age or older has the potential of becoming a contestant through Millionaire 's audition process. Those ineligible include employees, immediate family or household members, and close acquaintances of SPE, Disney, or any of their respective affiliates or subsidiaries; television stations that broadcast the syndicated version; or any advertising agency or other firm or entity engaged in the production, administration, or judging of the show. Also ineligible are current candidates for political office and individuals who have appeared on a different game show outside of cable that has been broadcast within the past year, is intended to be broadcast within the next year, or played the main game on any of the U.S. versions of Millionaire itself.
Potential contestants of the original primetime version had to compete in a telephone contest which had them dial a toll - free number and answer three questions by putting objects or events in order. Callers had ten seconds to enter the order on a keypad, with any incorrect answer ending the game / call. The 10,000 to 20,000 candidates who answered all three questions correctly were selected into a random drawing in which approximately 300 contestants competed for ten spots on the show using the same phone quiz method. Accommodations for contestants outside the New York City area included round trip airfare (or other transportation) and hotel accommodations.
The syndicated version 's potential contestants, depending on tryouts, are required to pass an electronically scored test comprising a set of thirty questions which must be answered within a 10 - minute time limit. Contestants who fail the test are eliminated, while those who pass are interviewed for an audition by the production staff, and those who impress the staff the most are then notified by postal mail that they have been placed into a pool for possible selection as contestants. At the producers ' discretion, contestants from said pool are selected to appear on actual episodes of the syndicated program; these contestants are given a phone call from staff and asked to confirm the information on their initial application form and verify that they meet all eligibility requirements. Afterwards, they are given a date to travel to the show 's taping facilities to participate in a scheduled episode of the show. Unlike its ABC counterpart, the syndicated version does not offer transportation or hotel accommodations to contestants at the production company 's expense; that version 's contestants are instead required to provide transportation and accommodations of their own.
The syndicated Millionaire also conducts open casting calls in various locations across the United States to search for potential contestants. These are held in late spring or early summer, with all dates and locations posted on the show 's official website. The producers make no guarantee on how many applicants will be tested at each particular venue; however, the show will not test any more than 2,500 individuals per audition day.
In cases when the show features themed episodes with two people playing as a team, auditions for these episodes ' contestants are announced on the show 's website. Both members of the team must pass the written test and the audition interview successfully in order to be considered for selection. If only one member of the team passes, he or she is placed into the contestant pool alone and must continue the audition process as an individual in order to proceed.
Originally, the U.S. Millionaire carried over the musical score from the British version, composed by father - and - son duo Keith and Matthew Strachan. Unlike older game show musical scores, Millionaire 's musical score was created to feature music playing almost throughout the entire show. The Strachans ' main Millionaire theme song took some inspiration from the "Mars '' movement of Gustav Holst 's The Planets, and their question cues from the $2,000 to the $32,000 / $25,000 level, and then from the $64,000 / $50,000 level onwards, took the pitch up a semitone for each subsequent question, in order to increase tension as the contestant progressed through the game. On GSN 's Gameshow Hall of Fame special, the narrator described the Strachan tracks as "mimicking the sound of a beating heart, '' and stated that as the contestant worked their way up the money ladder, the music was "perfectly in tune with their ever - increasing pulse. ''
The original Millionaire musical score holds the distinction of being the only game show soundtrack to be acknowledged by the American Society of Composers, Authors and Publishers, as the Strachans were honored with numerous ASCAP awards for their work, the earliest of them awarded in 2000. The original music cues were given minor rearrangements for the clock format in 2008; for example, the question cues were synced to the "ticking '' sounds of the game clock. Even later, the Strachan score was removed from the U.S. version altogether for the introduction of the shuffle format in 2010, in favor of a new musical score with cues written by Jeff Lippencott and Mark T. Williams, co-founders of the Los Angeles - based company Ah2 Music.
The U.S. Millionaire 's basic set is a direct adaptation of the British version 's set design, which was conceived by Andy Walmsley. Paul Smith 's original licensing agreement for the U.S. Millionaire required that the show 's set design, along with all other elements of the show 's on - air presentation (musical score, lighting system, host 's wardrobe, etc.), adhere faithfully to the way in which they were presented in the British version; this same licensing agreement applied to all other international versions of the show, making Walmsley 's Millionaire set design the most reproduced scenic design in television history. The original version of the U.S. Millionaire 's set cost $200,000 to construct. The U.S. Millionaire 's production design is handled by George Allison, whose predecessors have included David Weller and Jim Fenhagen.
Unlike older game shows whose sets are or were designed to make the contestants feel at ease, Millionaire's set was designed to make the contestant feel uncomfortable, so that the program feels more like a movie thriller than a typical quiz show. The floor is made of Plexiglas, beneath which lies a huge dish covered in mirror paper. Before the shuffle format was implemented in 2010, the main game had the contestant and host sit in chairs in the center of the stage, known as "Hot Seats"; these measured 3 feet (0.91 m) high and were modeled after chairs typically found in hair salons, and each seat featured a computer monitor directly facing it to display questions and other pertinent information. Shortly after the shuffle format was introduced, Vieira stated in an interview with her Millionaire predecessor on his morning talk show that the Hot Seat had been removed because the seat, originally intended to make the contestant feel nervous, actually ended up making contestants feel so comfortable that it no longer served the production team's purpose.
The lighting system is programmed to darken the set as the contestant progresses further into the game. There are also spotlights situated at the bottom of the set area that zoom down on the contestant when they answer a major question; to increase the visibility of the light beams emitted by such spotlights, oil is vaporized, creating a haze effect. Media scholar Dr. Robert Thompson, a professor at Syracuse University, stated that the show 's lighting system made the contestant feel as though they were outside of prison when an escape was in progress.
When the shuffle format was introduced, the Hot Seats and corresponding monitors were replaced with a single podium, and as a result, the contestant and host stand throughout the game and are also able to walk around the stage. Also, two video screens were installed -- one that displays the current question in play, and another that displays the contestant 's cumulative total and progress during the game. In September 2012, the redesigned set was improved with a modernized look and feel, in order to take into account the show 's transition to high - definition broadcasting, which had just come about the previous year. The two video screens were replaced with two larger ones, having twice as many projectors as the previous screens had; the previous contestant podium was replaced with a new one; and light - emitting diode (LED) technology was integrated into the lighting system to give the lights more vivid colors and the set and gameplay experience a more intimate feel.
The U.S. version of Millionaire was launched by ABC as a half-hour primetime program on August 16, 1999. When it premiered, it became the first U.S. network game show to offer a million-dollar top prize. After airing thirteen episodes and reaching an audience of 15 million viewers by the end of its first week, the program expanded to an hour-long format when it returned in November. The series, whose episodes originally aired only a day after taping, was promoted to regular status on January 18, 2000 and, at the height of its popularity, aired on ABC five nights a week. The show was so popular during its original primetime run that rival networks created or revived game shows of their own (e.g., Greed, Twenty One), as well as importing various game shows of British and Australian origin (such as Winning Lines, Weakest Link, and It's Your Chance of a Lifetime).
The nighttime version initially drew up to 30 million viewers per episode, three times a week, an unheard-of number in modern network television. In the 1999 -- 2000 season, it was the No. 1 program in the ratings, averaging 28,848,000 viewers. In the following season (2000 -- 01), three of the five weekly episodes placed in the top 10. However, ratings began to fall during the 2000 -- 01 season; by the start of the 2001 -- 02 season they were only a fraction of what they had been a year before, and by season's end the show was no longer ranked among the top 20. ABC's reliance on the show's popularity contributed to the network's quick fall from its former spot as the nation's most watched network.
As ABC 's overexposure of the primetime Millionaire led the public to tire of the show, there was speculation that the show would not survive beyond the 2001 -- 02 season. The staff planned on switching it to a format that would emphasize comedy more than the game and feature a host other than Philbin, but in the end, the primetime show was canceled, with its final episode airing on June 27, 2002.
In 2001, Millionaire producers began work on a half - hour daily syndicated version of the show, with the idea being that it would serve as an accompaniment to the network series which was still in production. ABC 's cancellation of the network Millionaire ended that idea; however, the syndicated Millionaire still had enough interest to be greenlit and BVT sold the series to local stations for the 2002 -- 03 season. The syndicated series nearly met the same fate as its predecessor, however, due in part to worries that stemmed from a decision made by one of its affiliates.
In the New York media market, BVT sold the syndicated Millionaire to CBS's flagship station, WCBS-TV. During the previous season, WCBS's mid-afternoon schedule had included the syndicated edition of NBC's Weakest Link, which aired at 4 pm from its January 2002 premiere. Joining Millionaire as a new syndicated series was a spinoff of The Oprah Winfrey Show hosted by Dr. Phil McGraw. WCBS picked up both series for 2002 -- 03, with Dr. Phil serving as lead-in for the syndicated Millionaire, which took over the time slot that Weakest Link had occupied.
At mid-season, WCBS announced that for the 2003 -- 04 season it had acquired the broadcast rights to The People 's Court after WNBC, which had been airing the revived series since its 1997 debut, dropped it from its lineup. WCBS announced plans to move The People 's Court into the time slot that was occupied by Millionaire and the still - airing 4: 30 pm local newscast once it joined the station 's lineup in September 2003. This led to speculation that the syndicated Millionaire would not be returning for a second season, and BVT 's concerns over losing its New York affiliate were compounded by the fact that there were not many time slots available for the show in New York outside of the undesirable late - night slots that syndicators try to avoid.
In June 2003, a shakeup at one of BVT 's corporate siblings provided the series with an opening. ABC announced that it would be returning the 12: 30 pm network time slot to its affiliates in October of that year following the cancellation of the soap opera Port Charles. ABC 's flagship, WABC - TV, was thus in need of a program to fill the slot and BVT went to them asking if the station would pick up Millionaire. WABC agreed to do this and when the new season launched that fall, the station began airing Millionaire at 12: 30 pm. Millionaire continued to air on WABC in the afternoon until the end of the 2014 -- 15 season, when it acquired the broadcast rights to FABLife for the 2015 -- 16 season. To make room for FABLife, the afternoon airing of Millionaire was moved to independent station WLNY - TV; however, FABLife was canceled in January 2016, and as a result, Millionaire moved back to WABC for the 2016 -- 17 season.
According to e-mails released in the Sony Pictures Entertainment hack, Millionaire narrowly avoided cancellation after the 2014 -- 15 season. The show's declining ratings prompted DADT to demand a dramatically reduced licensing fee for renewal, which SPE was hesitant to accept. The series was nonetheless renewed for the 2015 -- 16 season, with various cuts to the show's production budget and a return to the original format (but with only 14 questions). Had the show not been renewed, SPE would have placed it on extended hiatus for three years, reclaimed full rights to the show (without the innovations and format added in the syndicated run, to which DADT owns intellectual property rights), and shopped the revived show to another network or syndicator. On January 17, 2017, it was announced that Millionaire had been renewed through 2018.
Millionaire was subsequently renewed through the 2018 -- 19 season on January 17, 2018.
Game Show Network (GSN) acquired the rerun rights to the U.S. Millionaire in August 2003. The network initially aired only episodes from the three seasons of the original prime - time run; however, additional episodes were later added. These included the Super Millionaire spin - off, which aired on GSN from May 2005 to January 2007, and the first two seasons of the syndicated version, which began airing on November 10, 2008. On December 4, 2017, GSN acquired the rerun rights to the Harrison episodes of Millionaire (seasons fourteen and fifteen), which began airing December 18, 2017.
Various special editions and tournaments have been conducted which feature celebrities playing the game and donating winnings to charities of their choice. During celebrity editions on the original ABC version, contestants were allowed to receive help from their fellow contestants during the first ten questions. The most successful celebrity contestants throughout the show 's run have included Drew Carey, Rosie O'Donnell, Norm Macdonald, and Chip Esten, all of whom won $500,000 for their respective charities. The episode featuring O'Donnell's $500,000 win averaged 36.1 million viewers, the highest number for a single episode of the show.
There have also been special weeks featuring two or three family members or couples competing as a team, a "Champions Edition '' where former big winners returned and split their winnings with their favorite charities, a "Zero Dollar Winner Edition '' featuring contestants who previously missed one of the first - tier questions and left with nothing, and a "Tax - Free Edition '' in which H&R Block calculated the taxes of winnings to allow contestants to earn stated winnings after taxes, and various theme weeks featuring college students, teachers, brides - to - be, etc. as contestants. Additionally, the syndicated version once featured an annual "Walk In & Win Week '' with contestants who were randomly selected from the audience without having to take the audition test.
Special weeks have also included shows featuring questions concerning specific topics, such as professional football, celebrity gossip, movies, and pop culture. During a week of episodes in November 2007, to celebrate the 1,000 th episode of the syndicated Millionaire, all contestants that week started with $1,000 so that they could not leave empty - handed, and only had to answer ten questions to win $1,000,000. During that week, twenty home viewers per day also won $1,000 each.
In 2004, Philbin returned to host 12 episodes of a spin - off program titled Who Wants to Be a Super Millionaire in which contestants could potentially win $10,000,000. ABC aired five episodes of this spin - off during the week of February 22, 2004, and an additional seven episodes later that year in May. As usual, contestants had to answer a series of 15 multiple - choice questions of increasing difficulty, but the dollar values rose substantially. The questions for Super Millionaire were worth $1,000, $2,000, $3,000, $4,000, $5,000 (the first safe haven), $10,000, $20,000, $30,000, $50,000, $100,000 (the second safe haven), $500,000, $1,000,000, $2,500,000, $5,000,000, and $10,000,000.
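The Super Millionaire ladder described above can be written out as a simple data sketch. This is only an illustration, not any official implementation: the variable and function names are invented, and the "fall back to the last safe haven reached" rule is the standard Millionaire convention implied by the two safe havens named above.

```python
# Super Millionaire prize ladder (15 questions), with safe havens
# at $5,000 and $100,000 as described in the text.
LADDER = [1_000, 2_000, 3_000, 4_000, 5_000, 10_000, 20_000, 30_000,
          50_000, 100_000, 500_000, 1_000_000, 2_500_000, 5_000_000,
          10_000_000]
SAFE_HAVENS = {5_000, 100_000}

def guaranteed_amount(last_correct_value):
    """Winnings kept after a miss: the highest safe haven at or below
    the last correctly answered value (0 if no haven was reached)."""
    return max((h for h in SAFE_HAVENS if h <= last_correct_value),
               default=0)

# A contestant who missed after the $500,000 question would keep $100,000.
print(guaranteed_amount(500_000))  # -> 100000
```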
Contestants were given the standard three lifelines in place at the time (50:50, Ask the Audience, and Phone-a-Friend) at the beginning of the game. However, after correctly answering the $100,000 question, the contestant earned two additional lifelines: Three Wise Men and Double Dip. The Three Wise Men lifeline involved a panel of three experts, one of whom was always a former Millionaire contestant and at least one of whom was female; when this lifeline was used, the contestant and panel had 30 seconds to discuss the question and choices before the audio and video feeds were dropped. Double Dip gave a contestant two chances to answer a question. Once it was used, the contestant had to answer the question without using any further lifelines, and if the "first final answer" was incorrect, the contestant could not walk away. If the "second final answer" was also wrong, the contestant left with $100,000.
To celebrate the tenth anniversary of Millionaire's U.S. debut, the show returned to ABC primetime for an eleven-night event hosted by Philbin, which aired August 9 to 23, 2009. The Academy Award-winning film Slumdog Millionaire and the 2008 economic crisis helped boost interest in reviving the game show.
The episodes featured game play based on the previous rule set of the syndicated version (including the rule changes implemented in season seven) but used the Fastest Finger round to select contestants. Various celebrities also made special guest appearances at the end of every episode; each guest played one question for a chance at $50,000 for a charity of their choice, being allowed to use any one of the four lifelines in place at the time (Phone - a-Friend, Ask the Audience, Double Dip, and Ask the Expert), but still earned a minimum of $25,000 for the charity if they answered the question incorrectly.
On August 18, 2009, New York City resident Nik Bonaddio appeared on the program, winning $100,000 with the help of the audience and later, Gwen Ifill as his lifelines. Bonaddio then used the proceeds to start the sports analytics firm numberFire, which was sold in September 2015 to FanDuel, a fantasy sports platform.
The finale of the tenth anniversary special, which aired on August 23, 2009, featured Ken Basin, an entertainment lawyer from Los Angeles, CA., who went on to become the first contestant to play a $1,000,000 question in the "clock format ''. With a time of 4: 39 (45 seconds + 3: 54 banked time), Basin was given a question involving President Lyndon Baines Johnson 's fondness for Fresca. Using his one remaining lifeline, Basin asked the audience, which supported his own hunch of Yoo - hoo rather than the correct answer. He decided to answer the question and lost $475,000, becoming the first contestant in the U.S. version to answer a $1,000,000 question incorrectly. After Basin finished his run, Vieira appeared on - camera and announced that all remaining Fastest Finger contestants would play with her on the first week of the syndicated version 's eighth season. After this, the million dollar question was not played again on a standard episode until September 25, 2013, when Josina Reaves became the second U.S. Millionaire contestant to incorrectly answer her $1,000,000 question, but only lost $75,000 as she used her Jump the Question lifelines on her $250,000 and $500,000 questions.
Although the syndicated Millionaire had produced two millionaires in its first season, Nancy Christy 's May 2003 win was still standing as the most recent when the program began its eighth season in fall of 2009. Deciding that six - plus years had been too long since someone had won the top prize, producers conducted a tournament to find a third million dollar winner. For the first nine weeks of the 2009 -- 10 season, each episode saw contestants attempt to qualify for what was referred to as the "Tournament of Ten ''. Contestants were seeded based on how much money they had won, with the biggest winner ranked first and the lowest ranked tenth. Ties were broken based on how much time a contestant had banked when they had walked away from the game.
The tournament began on the episode aired November 9, 2009, and playing in order from the lowest to the highest seed, tournament contestants played one at a time at the end of that episode and the next nine. The rules were exactly the same as they were for a normal million dollar question under the clock format introduced the season before, except here, the contestants had no lifelines at their disposal. Each contestant received a base time of 45 seconds. For each question they had answered before walking away, the contestants received any unused seconds that were left when they gave their answers. The accumulated total of those unused seconds was then added to the base time to give the contestants their final question time limit.
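The clock arithmetic above, a 45-second base plus all banked unused seconds, can be sketched in a short illustration. The function name is invented for the example; the 3:54 (234 seconds) of banked time is the figure cited earlier for Ken Basin's run, which yields his 4:39 clock.

```python
# Sketch of the final-question clock rule: a 45-second base plus the
# unused seconds banked on questions answered earlier in the game.
BASE_SECONDS = 45

def final_question_clock(banked_seconds):
    """Return the final-question time limit as (minutes, seconds)."""
    total = BASE_SECONDS + banked_seconds
    return divmod(total, 60)

# Ken Basin banked 3:54 (234 seconds), giving a 4:39 clock.
minutes, seconds = final_question_clock(234)
print(f"{minutes}:{seconds:02d}")  # -> 4:39
```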
Each contestant faced the same decision as in regular play: whether to attempt the question or walk away with their pre-tournament total intact. Attempting the question and answering incorrectly incurred the same penalty as in regular play, reducing their pre-tournament winnings to $25,000. If the question was answered correctly, that player became the tournament leader; if a later player also answered correctly, that player assumed the lead and the previous leader kept their pre-tournament winnings. The highest remaining seed to have attempted and correctly answered their question at the end of the tournament on November 20, 2009 would be declared the winner and become the syndicated series' third millionaire.
The first contestant to attempt to answer the million dollar question was Sam Murray, the tournament 's eighth - seeded qualifier. On November 11, Murray was asked approximately how many people had lived on Earth in its history and correctly guessed 100 billion. Murray was still atop the leaderboard entering the November 20 finale as he remained the only contestant to even attempt to answer his or her question. The only person who could defeat him was top seed and $250,000 winner Jehan Shamsid - Deen, who was asked a question regarding the Blorenge, cited as "a rare example of a word that rhymes with orange ''. Shamsid - Deen considered taking the risk, believing (correctly) that the name belonged to a mountain in Wales. However, she decided that the potential of losing $225,000 did not justify the risk and elected to walk away from the question, giving Murray the win and the million dollar prize.
Since its introduction to the United States, GSN credited Who Wants to Be a Millionaire with not only single - handedly reviving the game show genre, but also breaking new ground for it. The series revolutionized the look and feel of game shows with its unique lighting system, dramatic music cues, and futuristic set. The show also became one of the highest - rated and most popular game shows in U.S. television history, and has been credited with paving the way for the rise of the primetime reality TV phenomenon to prominence throughout the 2000s.
The U.S. Millionaire also made catchphrases out of various lines used on the show. In particular, "Is that your final answer? '', asked by Millionaire 's hosts whenever a contestant 's answer needs to be verified, was popularized by Philbin during his tenure as host, and was also included on TV Land 's special "100 Greatest TV Quotes and Catch Phrases '', which aired in 2006. Meanwhile, during his tenure as host, Cedric signed off shows with a catchphrase of his own, "Watch yo ' wallet! ''
The original primetime version of the U.S. Millionaire won two Daytime Emmy Awards for Outstanding Game / Audience Participation Show in 2000 and 2001. Philbin was honored with a Daytime Emmy in the category of Outstanding Game Show Host in 2001, while Vieira received one in 2005, and another in 2009. TV Guide ranked the U.S. Millionaire # 7 on its 2001 list of the 50 Greatest Game Shows of All Time, and later ranked it # 6 on its 2013 "60 Greatest Game Shows '' list. GSN ranked Millionaire # 5 on its August 2006 list of the 50 Greatest Game Shows of All Time, and later honored the show in January 2007 on its only Gameshow Hall of Fame special.
In 2000, Pressman released two board game adaptions of Millionaire as well as a junior edition recommended for younger players. Several video games based on the varying gameplay formats of Millionaire have also been released throughout the course of the show 's U.S. history.
Between 1999 and 2001, Jellyvision produced five video game adaptations based upon the original primetime series for personal computers and Sony's PlayStation console, all of them featuring Philbin's likeness and voice. The first of these adaptations was published by Disney Interactive, while the latter four were published by Buena Vista Interactive, which had recently been spun off from Disney Interactive in an effort to diversify its portfolio. Of the five games, three featured general trivia questions, one was sports-themed, and another was a "Kids Edition" featuring easier questions. In 2008, Imagination Games released a DVD version of the show, based on the 2004 -- 08 format, complete with Vieira's likeness and voice, a quiz book, and a 2009 desktop calendar. Additionally, two Millionaire video games were released by Ludia in conjunction with Ubisoft in 2010 and 2011; the first was a game for Nintendo's Wii console and DS handheld system based on the clock format, while the second, for Microsoft's Xbox 360, was based on the shuffle format.
Ludia has also created a Facebook game based on Millionaire, which debuted on March 21, 2011. This game features an altered version of the shuffle format, condensing the number of questions to twelve -- eight in round one, and four in round two. A contestant can compete against eight other Millionaire fans in round one, and play round two alone if they make it into the top three. There is no "final answer '' rule; the contestant 's responses are automatically locked in. Answering a question correctly earns a contestant the value of that question, multiplied by the number of people who responded incorrectly. Contestants are allowed to use two of their Facebook friends as Jump the Question lifelines in round one, and to use the Ask the Audience lifeline in round two to invite up to 50 such friends of theirs to answer a question for a portion of the prize money of the current question.
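The Facebook game's payout rule, question value multiplied by the number of players who missed it, can be sketched as follows; the function name and dollar figures are hypothetical, chosen only to illustrate the multiplication:

```python
# Facebook-game scoring sketch: a correct answer earns the question's
# value multiplied by the number of rivals who answered incorrectly.
def question_winnings(question_value, wrong_answers):
    return question_value * wrong_answers

# Hypothetical $500 round-one question where 6 of the 8 rival
# players answered incorrectly:
print(question_winnings(500, 6))  # -> 3000
```

One consequence of this rule is that a correct answer on a question everyone else also gets right is worth nothing, which rewards knowing answers the rest of the field does not.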
Who Wants to Be a Millionaire -- Play It! was an attraction at the Disney 's Hollywood Studios theme park (when it was known as Disney - MGM Studios) at the Walt Disney World Resort in Orlando, Florida and at Disney California Adventure Park in Anaheim, California. Both the Florida and California Play It! attractions opened in 2001; the California version closed in 2004, and the Florida version closed in 2006 and was replaced by Toy Story Midway Mania!
The format in the Play It! attraction was very similar to that of the television show that inspired it. When a show started, a Fastest Finger question was given, and the audience was asked to put the four answers in order; the person with the fastest time was the first contestant in the Hot Seat for that show. However, the main game had some differences: for example, contestants competed for points rather than dollars, the questions were set to time limits, and the Phone - a-Friend lifeline became Phone a Complete Stranger which connected the contestant to a Disney cast member outside the attraction 's theater who would find a guest to help. After the contestant 's game was over, they were awarded anything from a collectible pin, to clothing, to a Millionaire CD game, to a 3 - night Disney Cruise.
This variant of Millionaire is directly mentioned in Steven Curtis Chapman 's album Declaration. In "Live Out Loud '', Chapman imagines a scenario where Regis Philbin invites him to compete in the show, and he wins the million - dollar prize "with two lifelines to spare ''.
|
in which ocean is the location 10 degrees s latitude 75 degrees e longitude located | 75th meridian east - wikipedia
The meridian 75 ° east of Greenwich is a line of longitude that extends from the North Pole across the Arctic Ocean, Asia, the Indian Ocean, the Southern Ocean, and Antarctica to the South Pole.
The 75th meridian east forms a great circle with the 105th meridian west.
Starting at the North Pole and heading south to the South Pole, the 75th meridian east passes through:
|